Ahead in the Clouds


November 17, 2017  2:31 PM

Cloud AI in the enterprise: Making the security case

Caroline Donnelly
Artificial intelligence, cloud, Cloud Security

In this guest post, Ross Brewer, managing director and vice president for Europe, Middle East and Africa at cybersecurity software supplier LogRhythm, makes the enterprise case for using artificial intelligence (AI) in the fight against cybercrime.

Organisations face a growing number of increasingly complex and ever-evolving threats – and the most dangerous threats are often the hardest to uncover.

Take the insider threat or stolen credentials, for example. We’ve seen many high-profile attacks stem from the unauthorised use of legitimate user credentials, which can be extremely difficult to expose.

Organisations are under growing pressure to detect and mitigate threats like these as soon as they arise, and this is only going to increase when the much talked about General Data Protection Regulation (GDPR) comes into force on 25 May 2018.

Security teams have a critically important job here. They need to be able to protect company data, often without time, money or resources on their side. This means they simply cannot afford to spend time on extensive manual threat-hunting exercises or deploying and managing multiple, disparate security products.

Cloud AI for efficiency

The perimeter-based model of yesterday is insufficient for the mammoth task of protecting a company’s assets. Instead, we are starting to see a shift towards automation and the application of cloud-based AI, which is fast becoming critical in the fight against modern cyber threats.

In fact, a recent IDC report predicted the AI software market will grow at a CAGR of more than 39% through to 2021, while separate research from the analyst firm stated that the future of AI requires the cloud as a foundation, with enterprise ‘cloud-first’ strategies becoming more prevalent over the same period.

The cloud is, without doubt, transforming security by enabling easy and rapid customer adoption, saving time and money, and providing companies with access to a class of AI-enabled analytics that would not otherwise be technically practical or affordable to deploy on-premise.

Plug-and-play implementation lets security teams focus on their mission instead of spending valuable time implementing and maintaining a new tool.

What’s more, when deployed in the cloud, AI can benefit from collective intelligence and a far broader perspective than any single deployment offers. Imagine incorporating real-world insight into specific threats in real time. This will advance the ability of AI-powered analytics to detect even the stealthiest or previously unknown threats more quickly, and with greater accuracy than ever before.

Using cloud AI to detect unseen threats

By combining a wide array of behavioural models to characterise shifts in how users interact with the IT environment, cloud-based AI technology is helping organisations pursue user-based threats, including signatureless and hidden threats.

Applying cloud-based AI throughout the threat lifecycle will automate and enhance entire categories of work, as well as enable increasingly faster and more effective detection of real threats. Take analytics, for example. Hackers are constantly evolving their tactics and techniques to evade existing protective and defensive measures, targeting new and existing vulnerabilities and unleashing attack methods that have never been seen before.

Cloud AI is beginning to play an important role in detecting these emerging threats. The technology is proactive and predictive, without the need for security and IT personnel to configure and tune systems: it automatically learns what is normal and evolves to register even the most subtle changes in events and behaviour that suggest a breach might be occurring.
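To make the idea of baselining concrete, here is a minimal, purely illustrative sketch of the kind of per-user baseline-and-deviation check that cloud AI platforms automate at far greater scale and sophistication. The user names, event counts and threshold are hypothetical, and this is not LogRhythm’s implementation.

```python
# Illustrative only: a toy baseline-and-deviation check of the kind an
# AI-driven analytics platform automates at far greater scale. All names
# and thresholds here are hypothetical.
from statistics import mean, pstdev

# Hypothetical hourly login counts per account over the past fortnight
history = {
    "j.smith": [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2],
    "svc-backup": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
}

# Most recent hour's observed counts
latest = {"j.smith": 3, "svc-backup": 40}

def is_anomalous(samples, observed, z_threshold=3.0):
    """Flag an observation that sits far outside the account's own baseline."""
    baseline, spread = mean(samples), pstdev(samples)
    if spread == 0:
        return observed != baseline
    return abs(observed - baseline) / spread > z_threshold

for account, count in latest.items():
    if is_anomalous(history[account], count):
        print(f"Investigate {account}: {count} logins deviates from baseline")
```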

Cloud-based AI essentially helps security analysts cut through the noise and detect serious threats earlier in their lifecycle so that they can immediately be neutralised. It provides rapid time-to-value through cloud delivery, and promises to eliminate or augment a considerable number of time-consuming manual threat detection and response exercises. This allows security teams to drive greater efficiency by focusing on the higher-value activities that require direct human touch.

November 13, 2017  7:48 AM

Cloud in healthcare: What is holding up adoption?

Caroline Donnelly
cloud, Cloud adoption, NHS, Public sector

In this guest post, Darren Turner, general manager at healthcare-focused hosting provider Carelink, takes a look at why the healthcare sector has been slow to adopt cloud, and how NHS Digital’s burgeoning Health and Social Care Network (HSCN) could help speed things up.

With the NHS facing an estimated funding gap of £30bn by 2020, there is immense pressure on the health service to increase operational efficiency and cut costs.

Against this backdrop, one could make the argument that ramping up its use of cloud could help the NHS achieve some of these savings. However, research suggests – compared to other public sector organisations – the health service has been slow to adopt off-premise technologies.

So why has healthcare been slower than other sectors to embrace cloud?

Part of the reason can be traced back to the perceived security risks, particularly when it comes to public cloud providers, and constraints around location and sharing of Patient Identifiable Data (PID).

There is also, I think, a lack of trust in cloud performance and resilience, as well as concerns around the physical location of hardware. Furthermore, there is a shortage of the right skills, of willingness to embrace the necessary cultural shift, and of budget to cover the cost of migration.

Perhaps it’s the government’s cloud-first policy that presents the biggest hurdle. Introduced in May 2013, it advises all central government departments to prioritise the purchasing of cloud technologies over on-premise software and hardware during the procurement process.

Outside of central government, no mandate for adopting a cloud-first policy exists for local authorities or public sector healthcare providers, for example. Instead, they are merely “strongly advised” to adopt a similar thought process during procurements.

Even so, it could be argued that healthcare providers feel pressure to put everything in the cloud, or assume public cloud is the only option, but that needn’t be the case. Instead, they should work with a trusted and technology-agnostic supplier to identify the best solution, or combination of solutions, for their organisation’s specific needs.

Hybrid services for healthcare providers

While there are no hard and fast rules, at present we find a common approach among healthcare providers is to favour a hybrid architecture, where hardware servers hosting legacy or resource-hungry applications are mixed with virtual machines running less intensive services in the cloud.

With a hybrid approach, organisations can realise the efficiencies of virtualisation, through increased utilisation of compute resources, while being able to more closely control the availability of those resources across the estate.

For larger estates, particularly those with high storage volumes, the cost of a private cloud platform can compare favourably to the cost of a hyperscale public cloud.

Healthcare providers should, therefore, work with their network and infrastructure supplier to explore this cost comparison to ensure they get the best value for their money.
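As a starting point for that conversation, the sketch below shows the shape such a comparison might take. Every figure is a hypothetical placeholder to be replaced with real supplier quotes and capacity numbers; it is not a pricing model from Carelink or any provider.

```python
# A back-of-the-envelope comparison of the kind worth running with your
# supplier. All figures below are hypothetical placeholders.
STORAGE_TB = 200            # estate storage footprint
VMS = 150                   # virtual machines required

# Hypothetical private platform: amortised hardware plus managed hosting
private_monthly = 18_000 + (STORAGE_TB * 12) + (VMS * 25)

# Hypothetical public cloud: per-GB storage plus per-VM instance pricing
public_monthly = (STORAGE_TB * 1024 * 0.10) + (VMS * 90)

print(f"Private platform: £{private_monthly:,.0f}/month")
print(f"Public cloud:     £{public_monthly:,.0f}/month")
# High storage volumes are typically what tips the balance, so rerun the
# numbers as STORAGE_TB grows and as egress and backup charges are added.
```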

Whether opting for private or public cloud, multi-cloud or a hybrid offering, when it comes to entrusting a supplier with an incredibly valuable and irreplaceable asset – data – healthcare providers need to be sure it’s secure.

Buyers should seek accredited suppliers with a proven track record in providing secure solutions and protecting mission critical environments, ideally in a healthcare environment.

Driving healthcare cloud adoption

The rollout of NHS Digital’s Health and Social Care Network (HSCN), the data network for health and social care organisations that replaces N3, could spur cloud adoption in some parts of the healthcare sector, as it could make services easier to provision.

Health and social care organisations will increasingly be able to access a full range of HSCN compliant network connectivity and cloud services from one supplier, simplifying the procurement process.

Assurance that HSCN obligations and standards have been met will also likely drive greater adoption of cloud services in the sector.

Indeed, we’re seeing cloud providers – ranging from SMEs to hyperscalers – setting up healthcare divisions and actively seeking suppliers to deliver HSCN connectivity.

Furthermore, HSCN could actually be the catalyst for driving cloud adoption as multi-agency collaboration develops, paving the way for healthcare organisations to deliver a more joined-up health and social care experience for the general public.


October 30, 2017  3:08 PM

Halloween horrors: When a datacentre migration takes a dark turn

Caroline Donnelly
Cloud migration, Cloud Security, datacentre, Troubleshooting

In this guest post, Drew Nielsen, chief trust officer at cloud-based backup provider Druva, shares a scary and hairy tale about how shortfalls in any company’s upkeep and maintenance schedule can result in some nasty surprises when datacentre migration time arrives.

Halloween is a time when scary stories are told and frightful fancies are shared. For IT professionals, it is no different: there are stories about IT budgets getting hacked to pieces, gross violations of information security, and cautionary tales about how poor planning can lead to data disaster. Those of us who have worked in the technology sector long enough all have stories to chill the server room.

My own story involves a datacentre migration project. The CIO at the time had seen a set of servers fitted with additional blue LEDs, rather than the usual green and red ones. This seemed enough to hypnotise him into replacing a lot of ageing server infrastructure with new-fangled web server appliances. The units were bought, and the project was handed over for us to execute.

Now, some of you may already be getting a creeping sense of dread, based on not being able to specify the machines yourselves. Others of you may think that we were worrying unnecessarily and this is a fine approach to take. However, let’s continue the story…

Sizing up the datacentre migration

The migration involved moving more than 3,000 websites from the old equipment to the new infrastructure. With the amount of hardware required at the time, this was a sizeable undertaking.

Rather like redeveloping an ancient burial ground, we started by looking at the networking to connect up all these new machines. This is where the fear really started to take hold, as there were multiple networking infrastructures in place. Alongside Ethernet, there were Token Ring, HIPPI, ISDN and SPI installations. Some of the cabling made up live networks, while some had been left in place but unconnected. However, all five networks were not alone.

Like any large building, certain small mammals had seen fit to make their homes in the networking tunnels. With so much of the cabling left in place, these creatures had lived – and more importantly, died – within this cosy, warm environment. And the warmth had led to more than one sticky situation developing. What had originally been a blank slate for a great migration project rapidly became messy, slimy and convoluted. Finding skulls, desiccated husks and things in the cabling led to an aura of eldritch horror.

Our plucky team persevered and eventually put in place a new network, ready for servers to be racked. With the distinct aroma of dead rat still lingering, it was time to complete the job, but one further, scary surprise was still to come.

When the number of server appliances had been calculated, the right number of machines had been ordered, but nobody had checked that this physical volume of machines would fit in the racks. Consequently, it became impossible to fit the UPS devices into the racks alongside all the servers. Cue more frustration and an awful lot of curse words being uttered.
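For what it’s worth, a sanity check of the following shape would have flagged the problem before the kit arrived. The sizes below are hypothetical placeholders, not the actual figures from the project.

```python
# A minimal sanity check: does the ordered kit physically fit once the UPS
# units are included? All sizes are hypothetical placeholders.
RACK_HEIGHT_U = 42
racks = 10

servers = 400
server_height_u = 1          # web server appliances, 1U each
ups_units = 10
ups_height_u = 4             # rack-mount UPS, 4U each

required_u = servers * server_height_u + ups_units * ups_height_u
available_u = racks * RACK_HEIGHT_U

if required_u > available_u:
    print(f"Short by {required_u - available_u}U of rack space")
else:
    print(f"Fits with {available_u - required_u}U to spare")
```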

Avoiding your own datacentre migration disaster

So what can you learn from this tale of terror? Firstly, keep your datacentre planning up-to-date. This means keeping accurate lists of what you have, what you had, and how old kit was disposed of. Unless you like dealing with a tide of rodent corpses, working with facilities management to keep the whole environment clear is a must too.

Secondly, planned downtime can lead to more problems and these should be considered in your disaster recovery strategy. Assessing your data management processes in these circumstances might not be a top priority, compared to big unexpected errors or ransomware attacks, but a datacentre migration can always throw up unexpected situations.

A datacentre migration should not include points of no return early in the process. Equally, knowing how to get back to a “known good state” can be hugely valuable.

Thirdly, it can be worth looking at how much of this you do need to run yourself. The advent of public cloud and mobile working means that more data than ever is held outside the business. Your data management and disaster recovery strategy will have to evolve to keep up, or it will become a shambling zombie in its own right.


October 17, 2017  12:11 PM

Discussing DevOps: A toast to the T-shaped engineers

Caroline Donnelly
cloud, DevOps, Digital transformation

In this guest post, Steve Lowe, CTO of student accommodation finder Student.com, introduces the concept of T-shaped engineers and the value they can bring to teams tasked with delivering DevOps-led digital transformation.

In an increasingly complicated world, businesses are constantly looking at how they deliver safe, stable products to customers faster. They do this using automation to create repeatable processes, paving the way for iterative improvements throughout the software development lifecycle.

If you read articles on DevOps or agility, you will come across references to security working more closely with Dev or Ops, Ops working more closely with Dev, or cross-functional teams.

To me these are all the same thing. And the logical evolution is they will become the same team: a team of engineers with enough knowledge to solve problems end-to-end and invest themselves in the outcome. To achieve this, a cross-functional team is nice, but a team of T-shaped engineers is better.

Making the case for T-shaped engineers

It’s important to understand all of your engineers are already T-shaped. They will have different skills and experiences, including some not quite related to their primary role. This is certainly the case if they have previously worked at a startup.

Equally, if you take a look at your ‘go-to’ team members, they probably don’t just specialise in one technology, but also possess a good breadth of knowledge about other areas too.

There is a reason for this. A good breadth of knowledge allows you to solve problems in the best place, leading to simpler solutions that are easier to support.

Now imagine instead of having a few ‘go-to’ team members, you’ve developed your whole team into T-shaped engineers. Engineers who can use a multitude of skills and technologies to solve a problem, and aren’t constrained by the idea of “this is what I know, therefore this is how I’ll solve it”.

By developing T-shaped engineers, you end up with a better and more resilient team ‒ where holidays and sickness have less impact as someone can always pick up the work ‒ and (usually) an easier to maintain and manage technical solution.

Tapping up T-shaped talent

The real challenge, of course, is finding a large enough pool of T-shaped engineers. For some reason, in a world where solutions are almost always built using multiple technologies, we have developed extreme niches for our engineers.

And while that drives great depth of knowledge on a subject, most problems no longer require that depth of knowledge to solve ‒ or the only reason that depth of knowledge is required is because your engineers don’t know a simpler solution using different technology.

The only solution to this challenge that I’ve found so far is to ‘grow your own’. Find people willing to learn multiple technologies, make it the primary requirement for your hiring, encourage cross learning, and support your team with goals that reward developing a breadth of knowledge.

Scale and right size the team

But let’s suppose I want a bigger team, so I can go even faster, how do I scale when my team is cross-functional?

First, it’s important to make sure having a bigger team will actually make you go faster. If you have a small code base, more engineers might actually just get in each other’s way and you’ll have to work on better prioritisation to make the best use of your team.

Assuming you can bring in more engineers to make a difference, dividing and managing your team requires a microservices architecture. If your system is engineered as a group of microservices, then you can separate those microservices into functional areas and build teams that own them.

There are several benefits to not dividing engineers by technical expertise. First, your team is still focused on an end-to-end solution. There is no more ‘Dev Complete’ ‒ it’s either ready for your customers or it needs more work.

Second, as long as you keep your interfaces between functional areas strong and backwards compatible, different teams are free to solve their problems with the best technology for the purpose.
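As a rough illustration of what “strong and backwards compatible” can mean in practice, the sketch below shows an additive change to a shared data contract between two functional areas. The field names are hypothetical, and this is a sketch of the principle rather than a prescription for any particular serialisation format.

```python
# A minimal sketch of keeping an interface between functional areas
# backwards compatible: new fields are additive and defaulted, so teams
# consuming the old shape keep working. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderSummary:
    # v1 fields - relied on by every consuming team
    order_id: str
    total_pence: int
    # v2 additions - optional with safe defaults, so v1 consumers that
    # ignore the new fields are unaffected
    currency: str = "GBP"
    discount_pence: Optional[int] = None

def legacy_consumer(order: OrderSummary) -> str:
    """A v1-era consumer: only touches the original fields."""
    return f"{order.order_id}: {order.total_pence}p"

print(legacy_consumer(OrderSummary(order_id="A123", total_pence=1999)))
```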

This gives nice separation and makes it easier to empower teams and reduce the coordination overhead. It also gives you a good answer as to how many engineers you need to run, develop and maintain your product.

The number of engineers is simply a function of the number of functional areas that make sense for your product. For the smallest team, it’s the minimum number of functional areas that makes sense (usually one).

If the question is how to go faster and how many engineers you need, it comes down to efficient team size and how many sensible functional areas you can divide your software into.

Why the timing is right

To understand why the timing is now right, and why this wasn’t always good practice, we need to step back and look at the overall picture.

Cloud services and APIs are everywhere, and lots of us are now integrated with a number of services. Infrastructure as Code is a reality and open source is now widely accepted as a must-have building block in most technology situations.

This combination means, to build a modern piece of software, depth of knowledge of any given technology is less important than breadth of knowledge in most cases.

Lots of the complexity is abstracted away, and building most of your solution will involve integrating a mix of open source and third-party services in new and interesting ways to solve a specific problem.

This allows engineers to focus on building a breadth of knowledge rather than a deep focus on one specific technology, so they can solve your problems efficiently.


June 27, 2017  3:28 PM

Datacentre managers on how to address the shortage of women and young people entering the industry

Caroline Donnelly
"tech skills", datacentre, Diversity, inclusion

CNet Training and Anglia Ruskin University invited Ahead in the Clouds along to their recent boot camp to lead a two-way discussion with its Datacentre Leadership and Management students about how to address the industry’s skills gap. Here’s all you need to know about what got discussed.

Datacentre managers are facing an uphill struggle when it comes to filling engineering roles, and a unified, industry-wide approach to closing the skills gap is urgently required.

That was one of a number of views put forward during a panel session Ahead in the Clouds (AitC) spoke at on Friday 16 June, as part of Anglia Ruskin University’s and CNet Training’s annual Datacentre Leadership and Management boot camp.

The boot camp gives people taking part in the university’s three-year, distance learning course in Datacentre Leadership and Management an opportunity to meet in person, share best practice and collaborate, as they work towards completing their Master’s degree.

The course itself is pitched at people already working in the industry, and is designed to give them access to “essential new learning, centred around leadership and management within a datacentre environment,” as the course brochure puts it.

The organisers claim the course is the first and only qualification of its kind in the world, with representatives from Capital One, Digital Realty and Unilever among the list of participants.

The panel session AitC participated in touched on the recruitment challenges senior leaders face in the datacentre industry, particularly when drawing in people from diverse backgrounds – a topic this blog has covered a number of times before.

You can’t be what you can’t see

One of the biggest blockers to recruitment is that, outside of the IT world, the general population has little understanding or awareness of what datacentres are or the important role they play in keeping our digital economy ticking over, AitC argued at the event.

And so it follows, if people don’t know the industry exists, they won’t be aware of the wide range of potential job opportunities datacentres give rise to.

There is also the fact that, as one of the course participants pointed out during the Q&A part of the session, a lot of the people who work in datacentres today “fell into” the industry. Few – if any – deliberately set out to work in it.

For this reason, you often find little in the way of consistency between the career paths of datacentre managers, for example, making it difficult for people to follow in their footsteps.

Establishing a consistent, coherent and repeatable career path for new entrants joining the industry might help here, it was suggested.

Age-old questions

The conversation also touched upon how best to market datacentre career opportunities to young people, and at what age this should begin.

Dr. Theresa Simpkin, a senior lecturer in leadership and corporate education at Anglia Ruskin University and the session moderator, said targeting undergraduates (and those on the cusp of leaving university) would be a tough sell.

Because, as she put it, many people at this point in their lives have already decided what direction they want their future career to take, and convincing them to junk those ideas and do something completely different would be hard.

This is why there are so many STEM initiatives aimed at encouraging school kids to take an interest in technology, and adopting this approach for datacentre-related careers could be the way to go.

School children, however, are likely to be just as clueless as most grown-ups (and their teachers) about what datacentres are, so this is an area where the industry should look to take the lead, by engaging with local schools and setting up outreach programmes.

Given how many of the big colocation companies have facilities located within industrial parks that neighbour residential areas, getting out into the local community to talk about what their companies do isn’t really a big ask, when you think about it.

They don’t necessarily need to share the exact coordinates of where their facilities are, but opening up about the contribution their companies make to keeping our digital economy ticking over would go some way to demystifying the datacentre industry.

Peppering that conversation with details of how these sites prop up the YouTube channels kids watch or the messages they send via Snapchat may just help make the concept of datacentres a little more relatable too.

Datacentre red tape stops play

Another course participant talked about how an attempt to hold an open day at their datacentre for local school children was quashed on health and safety grounds, putting paid to their attempt to lift the veil of secrecy the industry operates under.

A modern datacentre, with its iris scanners, man traps, and server rooms packed with space-age looking equipment, is exactly the kind of thing kids need to see to get excited about technology, they argued, but it was not to be.

The discussion also touched upon what needs to be done to make the datacentre industry an appealing employment proposition for women and – for the ones already working in it – what must be done to make them stay.

On the latter point, the importance of creating empathetic working environments was touched upon. In any male-dominated company, there is always a risk the needs of female employees may be neglected or overlooked.

Industry-wide problem solving

It quickly became apparent the people taking part in the bootcamp are keen to do what they can to address the datacentre industry’s skills shortage, but where should they be focusing their efforts and support?

There is no single body (made up of stakeholders representing the interests and views of the whole datacentre industry) working on this at present.

What we do have is some standalone organisations and a few groups (some more formalised than others) working separately to address these issues – and, in many cases, they would very much prefer to stay that way, predominantly for competitive reasons, a couple of the course participants told AitC privately.

Working in this closed-off way to address the datacentre skills gap seems daft, for want of a better word, given the many examples of how an open, collaborative, group approach to a problem can speed up progress in solving it.

An Open Compute Project for people?

As Riccardo Degli Effetti, head of datacentre operations at television broadcaster Sky, so eloquently put it during the session: what the industry needs to do is create an “Open Compute Project (OCP) for people” to tackle the skills shortage.

In OCP, you see industry foes set aside their differences to collaborate on the creation of open source server designs because they know they would struggle to achieve the same rate of innovation if they tried to go it alone.

The rest of the industry needs to apply the same kind of logic to closing the skills gap, because one group can’t do it all alone.

After all, it’s an industry-wide problem that needs solving, so it makes sense to take an industry-wide approach to tackling it.


June 8, 2017  9:24 AM

Enterprise DevOps: Is it anything new?

Caroline Donnelly
Agile, Agile software development, Continuous delivery, DevOps

In this guest post, Jon Topper, co-founder and principal consultant of hosted infrastructure provider The Scale Factory, shares his thoughts on how the DevOps movement is maturing as enterprises move to adopt it.

In 2009, a group of like-minded systems administrators gathered in Ghent, Belgium to discuss how they could apply the principles of agile software development to their infrastructure and operations work, and – in the process – created the concept of “DevOps”.

Over the course of the intervening years, DevOps has become a global movement, and established itself as a culture of practice intended to align the values of both developers and operations teams around delivering value to customers quickly and with a high level of quality.

The DevOps community understands that, although software and tools play a part in this, doing DevOps successfully is often much more about people than technology.

The emergence of enterprise DevOps

Perhaps inevitably, the term “Enterprise DevOps” has recently emerged in its wake, with new conferences and meetup groups springing up under this banner, and consultancies, recruitment agencies and software vendors all rushing to refer to what they do as Enterprise DevOps too.

Many original DevOps practitioners are sceptical of this trend, seeing it as the co-opting of their community movement by mercenary, sales-led organisations, and some of their scepticism is warranted.

The newcomers, in some cases, have shown themselves to be tone-deaf and have missed the point of the movement entirely. Some more established organisations have just slapped a “DevOps” label on their existing offerings, and show up to meetings in polo shirts instead of suits.

Different challenges at scale

As companies grow, new challenges arise at different scale points. Enterprises with thousands of employees are vast organisms, whose shape has been informed by years of adaptation to changing business environments.

In an SME, it is reasonable to assume the whole technology team knows each other. In enterprises, however, there are likely to be hundreds or thousands of individual contributors, spread across several offices and different time zones. A DevOps transformation needs to facilitate better communication between these teams, which can sometimes require reorganisation.

Over time, large businesses accrue layers of process and scar tissue around previous organisational mistakes. These processes govern procurement practices, change control and security practice, and can be incompatible with a modern DevOps and agile mind-set.

A successful DevOps transformation necessitates the questioning and dismantling of those processes where they are no longer adding value.

How is Enterprise DevOps different?

In all honesty, I’m not sold on the idea that Enterprise DevOps is an entirely unique discipline, at least not from an individual contributor perspective. Much of the same mindset and culture of practice is just as relevant for enterprise teams as it is in smaller businesses.

To allow these contributors to succeed in the context of a larger enterprise, substantial structural and process changes are required. Whether the act of making this change is something unique, or just the latest application of organisational change, is up for debate, but the term seems to be here to stay.

How to succeed with Enterprise DevOps

Although Enterprise DevOps is a recent addition to the lexicon, some larger businesses have been doing DevOps for years now, and those that have been successful in their transformations have a number of things in common.

One major success factor is having a high-level executive in place who champions this sort of work. A powerful, trusting business sponsor can be crucial in removing obstacles and ensuring transformations are provided with the resources they need.

Successful organisations seem to reorganise by building cross-functional teams aligned to a single product. These teams include project and product management, developers, ops team members, QA and others. They’re jointly responsible for building and operating their software. It should come as no surprise to learn these teams look like miniature start-up businesses inside a wider organisation.

Crucial to the success of these teams is a culture of collaboration and sharing. There’s little point in having multiple teams all trying to do the same thing in myriad different ways, or in all making the same mistakes. Successful teams in organisations like ITV and Hiscox have described their experiences of building a “common platform”. Design patterns and code are shared and reused between teams allowing them to build new platforms quickly, and at a high standard.

The cost of business transformation can be high, but now that DevOps is proven to work in the enterprise, can you really afford not to make this change?


May 24, 2017  12:00 PM

All eyes on Etsy as investors push public cloud to cut costs

Caroline Donnelly
Cloud Computing, DevOps, Etsy, Private Cloud, Public Cloud, Strategy

Online marketplace Etsy is under growing investor pressure, following a drop in share price, to cut costs. But will embracing public cloud do the trick?

Etsy is one of a handful of household names enterprises regularly name-check whenever you quiz them about where they drew inspiration from for their DevOps adoption strategy.

The peer-to-peer online marketplace’s engineers have been doing DevOps for more than half a decade, and are well-versed in what it takes to continuously deliver code, while causing minimal disruption to the website and mobile apps they’re trying to iteratively improve.

According to the company’s Code as Craft blog, Etsy engineers are responsible for delivering up to 50 code deploys a day, allowing the organisation to respond rapidly to user requests for new features, or improve the functionality of existing ones.

The fact the company achieves such a ferocious rate of code deploys per day, while running its infrastructure in on-premise datacentres, has marked it out as something of an anomaly within the roll-call of DevOps success stories.

For many enterprise CIOs, DevOps and cloud are intrinsically linked, while the Etsy approach proves it is possible to do the former successfully without the latter.

Investor interference at Etsy

One company investor, Black & White Capital (B&WC), seems less impressed with what Etsy has achieved through its continued use of on-premise technology, though, and has publicly called on its board of directors to start using public cloud instead.

B&WC, which owns a 2% stake in Etsy, made the demand (along with a series of others) in a press statement on 2 May 2017, with the aim of drawing public attention to the decline in shareholder value the company has experienced since it went public in April 2015.

According to B&WC, Etsy shares have lost 33% of their value since the IPO, while firms in the same category – listed on the NASDAQ Internet Index and the S&P North American Technology Sector Indexes – have seen the average price of their shares rise by 38% and 35%, respectively.

Given that Etsy is reportedly the 51st most popular website in the United States, and features around 45 million unique items for sale, B&WC argues the firm should be making more money and returning greater value to shareholders than it currently does.

Part of the problem, claims B&WC’s chief investment officer, Seth Wunder, is the lack of “expense management” and “ill-advised spending” going on at Etsy.

“This has allowed general and administrative expenses to swell to a figure that is more than 55% higher than what peers have spent historically to support a similar level of GMS [gross merchandise sales],” said Wunder, in the press statement.

“The company’s historical pattern of ill-advised spending has completely obfuscated the extremely attractive underlying marketplace business model, which should produce incremental EBITDA margins of greater than 50% with low capital investment requirements.”

Room for improvement?

The statement goes on to list a number of areas of operational improvement, compiled from feedback supplied by product engineers and former employees, that could potentially help cut Etsy’s overall running costs and drive sales.

These include reworking the site’s search optimisation protocols, and introducing marketing features that would open up repeat and cross-sale opportunities for people who use the site.

The press statement also goes on to make the case for freeing up Etsy’s research and development teams by using cloud, because they currently spend too much time keeping the firm’s on-premise infrastructure up and running.

“It is Black & White’s understanding that more than 50% of the approximately 450 people on the R&D team focus on maintaining the company’s costly internal infrastructure,” the statement continued. “A shift to the public cloud would provide long-term cost savings while also establishing a more flexible infrastructure to support future growth.”

Private vs. public cloud costings

Apart from the quotes above, there is no further detail to be found within the private letters B&WC sent to Etsy’s senior management team (and subsequently made public) about why it feels the firm would be better off in the public cloud.

If the investors’ main driver for doing so is simply to cut costs, it should not be assumed that shifting Etsy’s infrastructure off-premise will deliver the level of savings they are hoping for.

Indeed, there is growing evidence that it actually works out cheaper for some organisations to run an on-premise private cloud rather than use public cloud – particularly those whose traffic patterns are relatively easy to predict and rarely subject to unexpected spikes, as is (reportedly) the case with Etsy.

When Computer Weekly previously quizzed Etsy on the rationale behind its use of on-premise datacentres, the company said – based on the predictability of its traffic usage patterns – it makes commercial sense to keep things on-premise rather than use cloud.

It is difficult to say with any conviction what changes may have occurred at Etsy since then to cause its investors to reach a different conclusion, aside from their new-found commitment to cost-cutting.

The October 2016 edition of 451 Research’s Cloud Price Index report, meanwhile, suggests the investors could end up worse off by blindly favouring the use of public cloud over an on-premise, private cloud deployment.

“Commercial private cloud offerings currently offer a lower TCO when labour efficiency is below 400 virtual machines managed per engineer,” the analyst house said, in a blog post. “Past this tipping point, all private cloud options are cheaper than both public cloud and managed private cloud options.”
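To illustrate why that labour-efficiency figure matters, here is a toy cost-per-VM comparison. The prices are hypothetical placeholders chosen purely so the crossover lands near the quoted 400 VMs-per-engineer mark; this is not 451 Research’s model or Etsy’s actual cost base.

```python
# Illustrative only: a toy TCO-per-VM comparison showing why the number of
# VMs each engineer can manage matters. All figures are hypothetical
# placeholders, tuned so the tipping point sits near 400 VMs/engineer.
ENGINEER_COST_PER_MONTH = 8_000   # fully loaded engineer cost
SELF_MANAGED_VM_COST = 35         # hardware, power, space per VM/month
PUBLIC_CLOUD_VM_COST = 55         # comparable instance per VM/month

def private_cloud_cost_per_vm(vms_per_engineer: int) -> float:
    """Infrastructure cost plus the engineer's time spread over their VMs."""
    return SELF_MANAGED_VM_COST + ENGINEER_COST_PER_MONTH / vms_per_engineer

for efficiency in (50, 100, 200, 400, 800):
    private = private_cloud_cost_per_vm(efficiency)
    cheaper = "private" if private < PUBLIC_CLOUD_VM_COST else "public"
    print(f"{efficiency:>4} VMs/engineer: private £{private:6.0f}/VM -> {cheaper} cheaper")
```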

Take caution with cloud

There is no guarantee moving to public cloud will equate to long-term cost savings, and here’s hoping Etsy’s investors are not labouring under the misguided view that it will.

Within the DevOps community there are also plenty of war stories about how a change in senior management or investor priorities has resulted in companies abandoning their quest to achieve a sustainable and efficient continuous delivery pipeline, negatively affecting their digital delivery ambitions.

Given the investor’s agenda for change at Etsy is heavily weighted towards harnessing new technologies, as well as equipping its board of directors with more technology-minded folks, it would be a shame if that were to come at the expense of its position as one of the poster children for DevOps done right.


May 15, 2017  1:55 PM

How OpenStack is coming of age and tackling its growing pains head-on

Caroline Donnelly
IT trends, OpenStack, Private Cloud, Public Cloud, trends

The OpenStack community of users and contributors are united in their belief in the enterprise-readiness of the open source platform, but accelerating its adoption requires some back-to-basics thinking, finds Ahead In the Clouds.

One of the downsides of growing up is the gradual realisation that, despite our parents’ promises to the contrary, it is actually really hard to be anything we want to be.

You only have to look at how the lofty career ambitions of most small children get revised down with age, as they realise that becoming an astronaut (for example) is actually quite the academic undertaking, and that vacancies in the field are few and far between.

OpenStack appears to be going through a very similar journey of self-discovery and realisation, with many of the discussions at its recent user summit focusing on the need for its community to pare back their ambitions for the open source cloud platform.

This should not be interpreted as a push by the OpenStack Foundation, which oversees the development of the software, to curb the creativity of its contributors, but as an effort to ensure they’re working on meaningful projects and features that users actually want.

Clearing the cloud complexity

OpenStack started out with the goal of allowing enterprise IT departments to use its open source software to manage the pools of compute, storage, and networking resources within their datacentres to build out their private cloud capabilities.

In the six or so years since OpenStack has been going, the community contributing code to the platform has changed massively, with some high-profile vendors down-sizing their involvement (while others have ramped up theirs). Meanwhile, the number of add-ons and features the technology can offer enterprises has ballooned.

This has created a lot of unnecessary complexity for IT directors trying to work out if OpenStack is the right technology to create a private cloud for their business. In turn, they also need to work out which vendor distribution is right for them, what add-ons to include and whether they should do everything themselves or enlist the help of a managed services provider.

As enterprise adoption of the platform has grown, the OpenStack Foundation and its stakeholders now have a wider pool of users to glean insights from, with regard to what features they do and don’t use, which is helping cut some of this complexity.

As such, the Foundation set out plans, during the opening keynote of the Spring 2017 OpenStack Summit, to start culling projects and removing unpopular features from the platform to make OpenStack easier to use, and ensure the 44% year-on-year deployment growth it has reported recently continues apace.

Back-to-basics with OpenStack

Various stakeholders Ahead in the Clouds (AitC) spoke to at the OpenStack Summit said the Foundation’s commitment to getting “back-to-basics” is long overdue, with Canonical founder, Mark Shuttleworth, amongst the staunchest of supporters for this plan.

“We’ve always been seen somewhat contrary inside of the OpenStack Community because when everyone else was saying we should do everything, I was saying we should just do the [core] things and do them well,” he told AitC.

For Canonical, whose Ubuntu OpenStack is used by some of the world’s biggest telcos, media outlets, and financial institutions, this means delivering virtual machines, disks and network, on demand with great economics, said Shuttleworth.

“OpenStack does not need to be everything as a service. It just needs to be infrastructure as a service. Focusing just on that has been very successful for Canonical,” he continued.

“We focused on just the core, and that allows people to consume [the] OpenStack private cloud as cleanly as they can consume public cloud.”

As far as Shuttleworth is concerned, scaling down OpenStack’s ambition is a sign of the endeavour’s growing maturity, and will position the private cloud technology well for future growth.

“They say a mid-life crisis is all about realising you’re not going to be simultaneously a rock star, a Nobel prize winner, a top surfer and a famous poet. That is exactly what is happening with OpenStack,” he said.

“It’s not a bad thing. You can call me OpenStack’s greatest fan, but it’s just I happen to think chunks of it are bulls**t.”

OpenStack comes of age

At just six years old, OpenStack is – perhaps – a little premature to be going through a mid-life crisis, but it is certainly at the right stage of life to be experiencing growing pains, as its quest to become the private cloud of choice for enterprise customers rumbles on.

Its success here heavily relies on the ability of the Foundation and its stakeholders to alter some of the negative perceptions enterprises have about private cloud.

Some of these are borne out of end-users unfairly pitting the private cloud and public cloud against each other, Scott Crenshaw, senior vice president and general manager of OpenStack clouds at managed cloud firm Rackspace, told AitC at the Summit.

“We have enough experience in the industry to understand now – at a high level – what platform works best for each application. It’s not the Wild West any more where fear, uncertainty and doubt are driving buying decisions,” he said.

“There is no longer a Pollyanna view that everything goes into public cloud and then we’re all better off. The situation is a lot more nuanced than that.”

Enterprises are gradually coming to the realisation that drawing on public cloud resources to run their applications and workloads is not necessarily cheaper, and – in some situations – can actually work out a whole lot more expensive.

“If you knit all this together, what you see is private cloud and public cloud are not really in competition: they’re very complementary technologies,” he continued.

“It’s going to be horses for courses and it’s a great thing. It’s more economical, it’s more efficient, it’s good for the economy and for the end users.”

OpenStack’s growing pains have seen some of its big-name contributors downsize or simply tweak their involvement with the community, as their bets on the technology have not quite played out how they predicted.

Shuttleworth said this process is something all open source communities go through as they mature and evolve, and OpenStack will end up stronger for it.

“It’s like the internet in 1999. The internet didn’t stop once the dotcom bubble burst, and the same applies here. The need for OpenStack continues and is bigger than ever,” he added.

The latest OpenStack User Survey (published in April 2017) certainly backs the latter point, and it will be interesting to see how the Foundation’s efforts to trim the fat from OpenStack affect the deployment rates reported in next year’s edition and beyond.


April 21, 2017  1:46 PM

Desktop virtualisation dilemmas: Solving the VDI blame game

Caroline Donnelly
desktop virtualisation, Liquidware Labs, VDI

In this guest post, Kevin Cooke, product director at desktop virtualisation software provider Liquidware Labs, explains how CIOs and IT departments can avoid playing the blame game when working out why their VDI projects are not going to plan.

The move to virtual desktops, whether full on-premise virtual desktop infrastructure (VDI) or a managed desktop as a service (DaaS) in the cloud, can be fraught with hidden challenges. They may be technical or political, and can lead to disruption, unmet user expectations and reduced staff productivity.

These challenges or visibility gaps are amplified in larger environments, as there are more fingers in the pie, often combined with distributed technical responsibilities.

Ultimately, the question CIOs and IT directors should be asking is: who owns accountability for the user experience?

What good looks like

If delivered properly, the desktop or workspace should offer a consistent and familiar experience—regardless of whether it is delivered via physical PCs, virtualised locally or delivered as a service in the cloud. But who gets the light shined on them when things go astray? Is it the desktop team? Perhaps the infrastructure folks who own the storage, servers and network are to blame? And in the case of DaaS, this demarcation becomes a lot more imprecise.

Don’t play the VDI blame game

The frustration we hear time and time again centres on who’s at fault. If VDI or DaaS is the last technology employed, it often gets the blame. And don’t discount people or organisational challenges, where user rebellion or office politics can be at play.

The lack of visibility and understanding of user experience can occur regardless of the delivery approach or platform. For cloud and managed services, there are issues that centre around where lines of accountability should be drawn. And, without a specific user experience SLA, it can be almost impossible to ensure you can measure, enforce and remediate these issues—even if you could draw appropriate lines between IT teams and find the true root cause.

Environmental challenges

While these challenges are not unique to DaaS, they do muddy the waters when attempting to determine accountability. How do you navigate these issues when your team points the finger at the service provider and the cloud folks claim it’s not their issue? I’ll present a number of common challenges we routinely face in the field. Some are related to infrastructure and delivery. Some are simply good practice and tasks that should be applied to any desktop.

In no particular order, I present a list of common visibility challenges that can play a significant role in user experience.

  • Desktops are like cupboards: If you don’t clean them out once in a while they become wildly inefficient, so be sure to reboot your physical, persistent and non-persistent pools, people.
  • We’ve always done it that way: Stop using old-school approaches to managing desktop patches, such as Microsoft System Center Configuration Manager (SCCM), on new-school architectures like DaaS. I understand it’s the way you’ve always done it, but that does not make it correct.
  • User tiering and memory allocation: When moving to DaaS, or moving a physical PC to VDI, it is critical that you understand metrics such as memory, and what is consumed by users and user groups. On the one hand you could under-provision, placing power users into a smaller memory footprint than required. These users will never be happy, as their VMs will constantly page to disk. On the other hand, over-provisioning means wasted resources, resulting in an elusive ROI that will never be realised as you are overpaying for VMs (see the sizing sketch after this list).
  • Controlling video, audio, keyboard and mouse signals correctly: Poor user experience can sometimes be traced back to the display protocol. Understanding the network, and how your display protocol behaves when constrained, is key to tuning and optimising its performance. Would you be surprised to learn it was your wide area network provider that was to blame?
  • Master desktop image is everything: Pushing the same image to everyone – regardless of what they consume – is just plain wasteful. We worked at length with a customer who had not included the proper PCoIP components in their base image. The images installed and all seemed well on the DaaS platform, but the desktops were not accessible from their thin clients. Understanding what your users need, and building the appropriate image, is very important.
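As referenced in the tiering point above, here is a minimal sketch of sizing VM memory from observed peak usage rather than guesswork. The tier sizes, headroom factor and sample data are all hypothetical.

```python
# A minimal sketch of sizing VM memory from observed usage rather than
# guesswork. Tier sizes and the sample data are hypothetical.
TIERS_GB = (4, 8, 16)           # hypothetical DaaS memory tiers on offer
HEADROOM = 1.25                 # keep 25% spare to avoid paging to disk

# Hypothetical peak working-set observations (GB) per user group
peak_usage_gb = {"task_worker": 2.8, "analyst": 5.9, "developer": 11.2}

def pick_tier(peak_gb: float) -> int:
    """Smallest tier that still leaves headroom over the observed peak."""
    needed = peak_gb * HEADROOM
    for tier in TIERS_GB:
        if tier >= needed:
            return tier
    return TIERS_GB[-1]         # power users land on the largest tier

for group, peak in peak_usage_gb.items():
    print(f"{group}: peak {peak} GB -> {pick_tier(peak)} GB tier")
```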

In addition to understanding what users need, CIOs need to get a handle on when they need it. In the spirit of cloud, concurrency and resource utilisation, it is important to understand when users require their workspaces.

Being able to identify workers who are not using their workspace is one side of this exercise, but right-sizing your pool and workspace count is another – whether you are paying by the month, by the CPU cycle, or by the named user. Understanding your actual consumption is key to maximising your ROI.
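Here is a minimal sketch of that right-sizing exercise, assuming you can export login and logout times from your session logs; the sample sessions are hypothetical.

```python
# A minimal sketch of right-sizing a workspace pool from session data: peak
# concurrent sessions, not named users, drives how many workspaces you pay
# for. The sample sessions are hypothetical.
from datetime import datetime

# (login, logout) pairs pulled from a hypothetical session log
sessions = [
    (datetime(2017, 4, 20, 8, 55), datetime(2017, 4, 20, 17, 5)),
    (datetime(2017, 4, 20, 9, 10), datetime(2017, 4, 20, 12, 30)),
    (datetime(2017, 4, 20, 13, 0), datetime(2017, 4, 20, 18, 0)),
    (datetime(2017, 4, 20, 9, 5), datetime(2017, 4, 20, 17, 45)),
]

# Sweep through login/logout events to find peak concurrency
events = [(start, 1) for start, _ in sessions] + [(end, -1) for _, end in sessions]
concurrent = peak = 0
for _, delta in sorted(events):
    concurrent += delta
    peak = max(peak, concurrent)

print(f"Named users: {len(sessions)}, peak concurrent: {peak}")
# Buying for peak concurrency plus a small buffer, rather than per named
# user, is usually where the ROI discussed above comes from.
```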

Could you be the problem?

I often hear seasoned IT organisations moan about VDI and DaaS: how it does not work, or is not delivering on all its promises. I bite my tongue, and simply say VDI may not be to blame.

Many of the issues and challenges noted above are key contributors to a poorly performing DaaS or on-prem VDI platform. With vast experience in helping to diagnose and remediate these complex environments, I can honestly say it is not often the core VMware, Citrix or DaaS platform that is to blame.

More often than not, it is a tangential issue that is the cause. Similarly, before you point the finger at your cloud provider, be sure to understand the contributing and supporting components and how they can affect your overall user experience.


March 30, 2017  3:40 PM

Lock-in: Using cloud-neutral technology to avoid it

Caroline Donnelly
Amazon DynamoDB, cloud, Google Cloud, lock-in, MarkLogic

In this guest post, Gary Bloom, CEO of database software supplier MarkLogic, explains why adopting a cloud-neutral strategy is essential for enterprises to avoid lock-in.

Not so long ago, choosing a single cloud provider seemed a sensible approach. But as the market has matured, enterprises are realising how easy it is to become locked in to a single provider, and the downsides that can bring.

According to market watcher 451 Research, some organisations are mitigating the risk of vendor lock-in by adopting the operating principle of ‘AWS+1’.

The analyst firm believes 2017 will be the year CIOs move to adopt cloud services from Amazon Web Services (AWS) and one other competing provider to ensure they are not locked into a single supplier or location.

This approach offers greater flexibility to match applications, workloads and service requests to their optimal IT configurations.

But even playing two providers off each other might not allay the risk of lock-in, as Snap’s public filing revealed in February this year.

The company, which owns the messaging app Snapchat, plans to spend an eye-watering $2 billion on Google Cloud, its existing provider, as well as a further $1 billion on AWS over the next five years.

In its regulatory filing, Snap exposes the real impact of changing cloud providers with words that should send shivers down the spine of any CIO.

“Any transition of the cloud services currently provided by Google Cloud to another cloud provider would be difficult to implement and will cause us to incur significant time and expense,” the document states.

“If our users or partners are not able to access Snapchat through Google Cloud or encounter difficulties in doing so, we may lose users, partners or advertising revenue.”

This alarming note demonstrates the risk of lock-in even with two cloud providers. Snap is also considering building its own cloud infrastructure but this presents similar risks.

The creep of cloud lock-in

IT departments may be wary of lock-in, but it can happen without them realising it. It starts as soon as someone decides to use the cloud provider’s proprietary APIs to reduce the amount of coding required to launch an application in the cloud.

For example, when developers write software for Amazon’s DynamoDB database they are being locked into AWS.

Although it is unlikely that you will want to move workloads back and forth between clouds, there is a high chance you will want to deploy an application in another cloud at some point. At that point, it can prove a lengthy and expensive process to rewrite the application.

The solution is to design cloud applications with cloud neutrality at their core, underpinned by cloud-neutral database technology that works across every cloud provider and on-premise.
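As a rough sketch of the principle (and not MarkLogic’s API), the example below keeps application code against a neutral interface, with each provider’s database hidden behind an adapter that can be swapped without rewriting the application.

```python
# A minimal sketch of the idea: application code talks to a neutral
# interface, and each cloud's database sits behind an adapter implementing
# it, so swapping providers means swapping one class.
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional

class DocumentStore(ABC):
    """Cloud-neutral contract the application codes against."""

    @abstractmethod
    def put(self, key: str, doc: Dict[str, Any]) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[Dict[str, Any]]: ...

class InMemoryStore(DocumentStore):
    """Local/test adapter; a DynamoDB, other cloud or on-premise database
    adapter would implement the same two methods behind the same interface."""

    def __init__(self) -> None:
        self._docs: Dict[str, Dict[str, Any]] = {}

    def put(self, key: str, doc: Dict[str, Any]) -> None:
        self._docs[key] = doc

    def get(self, key: str) -> Optional[Dict[str, Any]]:
        return self._docs.get(key)

def record_order(store: DocumentStore) -> None:
    # Application logic never imports a provider SDK directly
    store.put("order:42", {"item": "lamp", "qty": 1})
    print(store.get("order:42"))

record_order(InMemoryStore())
```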

With a cloud-neutral approach, you are not tying your fortunes to those of your cloud provider, and you are also able to play vendors off against each other to get a better deal.

In a sign that cloud neutrality is coming of age, SAP recently announced that it is making its HANA in-memory database available across all the major public cloud platforms, as well as its own private cloud, to give its customers the opportunity to switch providers.

Other enterprise ISVs are likely to follow suit in time, but none of us can predict how the cloud market will evolve long-term and cloud neutrality keeps the door open. So, when an alternative vendor launches a new service or specialty that is more suited to your needs, you can switch with relative ease.

Being cloud neutral is an effective insurance policy in an age when it seems that nobody is immune from the threat of cyber attacks. If your cloud provider has a breach, you want to be able to move to another provider quickly.

CIOs have not forgotten the dark days when datacentre outsourcers held enterprises to ransom and nobody would willingly make the same mistake in the cloud. With cloud neutrality at the heart of your strategy, you can have the peace of mind that comes from future proofing your business.


