Ahead in the Clouds


March 28, 2018  3:17 PM

Better cloud management: the Ops team vs. machines

Caroline Donnelly

In this guest post, Steve Lowe, CTO of student accommodation finder Student.com, weighs up the risks and benefits of relying on machines to carry out Ops jobs within cloud infrastructures.

The “if it ain’t broke, don’t fix it” mantra is firmly ingrained in the minds of most engineers, and – in practice – it works when your resource is fixed and there is no upside to scaling down.

However, when you are paying by the hour or even by the second (as is often the case with cloud), scaling down can make a huge difference.

Assembling a team of engineers that are able to make quick, risk-based decisions using all the information at their disposal is not an easy feat at all.

Worryingly, within modern-day teams, there is often a tendency to think things will work themselves out in a few minutes. Or that, if something is working fine, it is safer to let it run than to scale it down.

People vs. machines

In a high-pressure situation, even some of the best decision makers and quickest thinkers can be hard-pressed to come up with a viable solution within the required period of time.

This is where the emotionless machine wins hands down every time. Machines can join together and analyse far wider ranges of data sets, and use that data to make scaling decisions.

On top of that, these decisions are made in a fraction of the time a team of engineers would need. And if you tell your machine to follow your workloads, it will do just that.
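To make that concrete, the decision logic a machine applies can be as simple as a few utilisation rules evaluated continuously against fresh metrics. The sketch below is illustrative only: the thresholds, bounds and the metrics dictionary are assumptions for the example, not any particular organisation's policy.

```python
# Illustrative sketch of machine-driven scaling: join a few metrics and
# decide capacity in milliseconds, with no attachment to "leave it running".
# The thresholds and the metrics dictionary are assumed for the example.
def decide_capacity(current_instances: int, metrics: dict) -> int:
    """Return the desired instance count from simple utilisation rules."""
    cpu = metrics["avg_cpu_percent"]
    queued = metrics["queued_requests"]
    if cpu > 75 or queued > 1000:              # under pressure: scale out
        return min(current_instances + 2, 20)
    if cpu < 25 and queued == 0:               # quiet and billed by the hour: scale in
        return max(current_instances - 1, 2)
    return current_instances                   # otherwise leave it alone


# A cautious human might leave six idle servers running overnight;
# the machine scales them down without hesitation.
print(decide_capacity(6, {"avg_cpu_percent": 12, "queued_requests": 0}))  # -> 5
```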

Another benefit of relying on emotionless machines is their automation and reliability, meaning workloads can be easily repeated and delivered consistently to the required standard.

Here comes Kubernetes

As enticing as all this may sound, especially to organisations wanting to scale up, the devil is in the implementation. For a number of years, a significant amount of work was required to implement auto-scaling clusters in a sensible way. This was especially the case with more complex systems.

The solution? Kubernetes. As long as your software is already running in containers, Kubernetes makes implementation much simpler and smoother. And autoscaling a Kubernetes cluster based on cluster metrics is a relatively straightforward task.

Kubernetes takes care of keeping services alive and load balancing the containers across the compute environment. And, finally, enterprises get the elastically-scaling compute resource they always dreamed of.
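As a rough illustration of how little is involved once workloads are containerised, the sketch below declares a Horizontal Pod Autoscaler with the official Kubernetes Python client, scaling a hypothetical "web" deployment on average CPU utilisation. The deployment name, replica bounds and 70% target are assumptions for the example; node-level cluster autoscaling is configured separately on top of this.

```python
# Minimal sketch: declare metric-driven autoscaling for a containerised
# workload using the official Kubernetes Python client (pip install kubernetes).
# The "web" deployment, replica bounds and CPU target are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```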

What to do with the Ops crew?

Now that the machines have helped the organisation free up all this time, the question of what to do with the people in the Ops team needs answering.

Well, with cyber attacks increasing in both number and sophistication, there’s never been a better time to move free resources into security. With a little training, the Ops team are perfectly positioned to focus on making sure systems are secure and robust enough to withstand attacks.

With the addition of Ops people with practical hands-on experience, the organisation will be better positioned to tackle future problems. From maintaining and testing patches to running penetration testing, the Ops people will add direct value to your company by keeping it safe, while the machines take care of the rest.

March 23, 2018  11:31 AM

Cloud computing: Past, present and future

Caroline Donnelly
adoption, cloud, Virtualisation

In this guest post, Susan Bowen, vice president and general manager at managed service provider Cogeco Peer 1, takes a look at the history of cloud computing, and where enterprises are going with the technology.

The late 1960s were optimistic days for technology advancements, reflected headily in the release of Stanley Kubrick’s iconic 2001: A Space Odyssey and its infamously intelligent computer H.A.L. 9000.

Around this time the concept of an intergalactic computing network was first mooted by a certain J.C.R. Licklider, who headed up the Information Processing Techniques Office at the Pentagon’s Advanced Research Projects Agency (ARPA).

Charged with putting the US on the front foot, technologically, against the Soviets, ARPA was a hotbed of forward-thinking that pushed technology developments well beyond their existing limits.

Within this context, Licklider envisioned a global network that connected governments, institutions, corporations and individuals. He foresaw a future in which programs and data could be accessed from anywhere. Sounds a bit like cloud computing, doesn’t it?

This vision actually inspired the development of ARPANET, an early packet switching network, and the first to implement the protocol suite TCP/IP. In short, this was the technical foundation of the internet as we now know it.

It also laid the foundation for grid computing, an early forerunner of cloud, which linked together geographically dispersed computers to create a loosely coupled network. In turn this led to the development of utility computing which is closer to what Licklider originally envisioned.

It’s also closer to what we think of as the cloud today, with a service provider owning, operating and managing the computing infrastructure and resources, which are made available to users on an on-demand, pay-as-you-go basis.

From virtualisation to cloud

Dovetailing with this is the development of virtualisation which, along with increased bandwidth, has been a significant driving force for cloud computing.

In fact, virtualisation ultimately led to the emergence of companies like Salesforce.com in 1999, which pioneered the concept of delivering enterprise applications via a simple website.

This services firm paved the way for a wide range of software companies to deliver applications over the internet. But the emergence of Amazon Web Services in 2002 really set the cloud ball rolling, with its cloud-based services including storage and compute.

The launch of Amazon's Elastic Compute Cloud (EC2) several years later was the first widely accessible cloud computing infrastructure service, allowing small companies to rent computers on which to run their own applications.

It was a godsend for many small businesses and start-ups, helping them eschew costly in-house infrastructure and get to market quickly.

When cloud is not the answer

Of course, cloud went through a ‘hype’ phase and certainly there were unrealistic expectations and a tendency to see the cloud as a panacea that could solve everything.

This inevitably led to poor decision making when crafting a cloud adoption strategy, often characterised by a lack of understanding and under investment.

This led a few enterprises to adopt an "all-in" approach to cloud, only to pull back as their migration plans progressed, with few achieving their original objectives.

Today expectations are tempered by reality and in a sense the cloud has been demystified and its potential better understood.

For instance, business discussions have moved from price points to business outcomes.

Enterprises are also looking at the cloud on an application-by-application basis, considering what works best as well as assessing performance benchmarks, network requirements, their appetite for risk and whether cloud suppliers are their competitors or not.

In short, they understand the agility and flexibility of the cloud definitely makes it a powerful business tool, but they want to understand the nuts and bolts too.

The evolution of cloud computing

Nearly every major enterprise has virtualised its server estate, which is a big driver for cloud adoption.

At the same time the cloud business model has evolved into three categories, each with varying entry costs and advantages: infrastructure-as-a-service, software-as-a-service, and platform-as-a-service. Within this context private cloud, colocation and hosted services remain firmly part of the mix today.

While cloud has its benefits, enterprises are running into issues, as their migrations continue.

For instance, the high cost of some software-as-a-service offerings has caught some enterprises out, while the risk of lock-in with infrastructure-as-a-service providers is also a concern for some. As the first flush of cloud computing dissipates, enterprises are becoming more cautious about being tied into a service.

The scene is now set for the future of cloud, with a focus on hybrid and multi-cloud deployments. In five years' time, terminology like hybrid IT, multi-cloud and managed services will be a thing of the past.

It will be a given: there won't just be one cloud solution or one data solution; it will be about how clouds connect and about maximising how networks are used.

Hybrid solutions should mean that workloads will automatically move to the most optimised and cost-effective environment, based on performance needs, security, data residency, application workload characteristics, end-user demand and traffic.

This could be in the public cloud, or private cloud, or on-premise, or a mix of all three. It could be an internal corporate network or the public internet, but end-users won't know, and they won't care either.

Making this happen requires application architectures to be refactored, with existing architectures shifting from the classic three-tier model to event-driven designs.

Cloud providers are already pushing for these features to be widely adopted for cloud-native applications. And as enterprises evolve to adapt, hyperscale public clouds and service providers are reaching back into on-premise data centres.

This trend will continue over the next five to ten years, with the consumption-based nature of cloud driving further uptake.

While we wouldn’t call this the intergalactic computing network as ARPA’s Licklider originally envisioned, it’s certainly moving closer to a global network that will connect governments, institutions, corporations and individuals.


March 19, 2018  12:05 PM

Data-sharing and cloud: A big data match made in heaven

Caroline Donnelly
Big Data, cloud, Collaboration, Data sharing

In this guest post, Thibaut Ceyrolle, vice president for Europe, Middle-East and Africa (EMEA) at data warehousing startup Snowflake Computing, makes the business case for using cloud to boost data-sharing within enterprises and beyond.

The phrase, ‘data is the new oil’, continues to hold true for many organisations, as the promise of big data gives way to reality, paving the way for companies to glean valuable insights about their products, services, and customers.

According to Wikibon, the global big data market will grow from $18.3bn in 2014 to an incredible $92.2bn by 2026, as the data generated by the Internet of Things (IoT), social media and the web continues to grow.

Traditional data warehousing architectures and big data platforms, such as Hadoop, which grew out of the desire to handle large datasets, are now beginning to show signs of age as the velocity, variety and volume of data rapidly increase.

To cope with this new demand, the evolution and the innovation in cloud technologies have steadily simmered alongside the growth in data. Cloud has been a key enabler in addressing a vital customer pain-point of big data: how do we minimise the latency of data insight? Data-sharing is the answer.

The business of big data

By channelling big data through purpose-built, cloud-based data warehousing platforms, organisations can share it internally and between one another in real time with greater ease, helping them respond better to the market.

Through cloud, organisations are able to share their data in a governed and secure manner across the enterprise, ending the segregation and latency of data insights among both internal departments and external third parties.

Previously, analysing data sets was limited to the IT or business intelligence teams that sat within an organisation. But as data continues to grow and become an important ‘oil’ for enterprises, it has caused a shift in business requirements.

For data-driven companies, data is now being democratised across the entire workforce. Different sections within a business such as sales, marketing or legal will all require access to data, quickly and easily, but old big data architectures prevented this.

By contrast, the advent of cloud has made it possible to support concurrent users effectively, meaning everyone from junior to board-level employees can gain holistic insight to boost their understanding of both the business and its customers.

On-premise legacy solutions also struggle to process large datasets, taking days or even weeks to extract and process the data at a standard ready for organisations to view. This data would then need to be shifted to an FTP server, an Amazon Web Services (AWS) S3 bucket or even sent by email.

But the cloud has become a referenceable space, much like how the internet operates. We are easily able to gain insights by visiting a webpage, and in the same way, the cloud can also be used as a hub to quickly access this data.

Data-sharing enables collaboration

Data-sharing not only improves the speed of data insights, but can also help strengthen cross-company collaboration and communication protocols too.

Take a multinational company, for example: it is often divided into many subsections, or has employees based across the globe. Yet – through cloud – data sharing can help bring these separate divisions together, without having to replicate data insights for different regions.

As data volumes increase year-on-year, we also see data sharing evolve in the process. The insatiable desire for data will result in organisations tapping into the benefits of machine learning to help sift through the mountains of information they receive.

Organisations that capitalise on machine learning will also be better positioned to draw on the variety of data sources available and glue them together to serve as an interconnected data network.

Most of the data pulled is from cloud-based sources and as a result, organisations can use machine learning to get 360 degree visibility of customer behaviours. This enables organisations to better tailor specific products or services to them based on their preferences. Without the cloud, this simply wouldn’t have been possible.

Walmart, the largest retailer in the world, has grown its empire by capitalising on big data and using machine learning to maximise customer experiences. By using predictive analytics, Walmart stores can anticipate demand at certain hours and determine how many employees are needed at the checkout, helping improve instore experiences.

Other retailers, such as Tesco, are similarly using machine learning to better understand the buying behaviours of their customers before products are purchased, to provide a more seamless e-commerce experience.

As the data-sharing economy grows, data’s reputation as the new oil will continue to surge in the years ahead. Cloud-based technologies are working hand-in-hand with big data to not only offer key insights, but also serve as a foundation for collaboration and communication between all parties within the business ecosystem.

With more and more organisations drawing on cloud and machine learning, it will only fuel the business decisions made through data, offering unparalleled opportunities to respond to customer demands faster than ever before.


February 28, 2018  4:45 PM

Price caps: Keeping public cloud costs under control

Caroline Donnelly

In this guest post, Richard Blanford, managing director of cloud services provider Fordway, shares his top tips on what IT departments can do to curb public cloud overspend.

As the costs of public cloud services continue to fall, moving services off-premise increasingly looks like a sensible business decision. However, there is much more to running a service than server capacity, and if organisations are not careful their costs can quickly spiral out of control.

To keep public cloud costs in check, in-house IT teams need to educate themselves on cloud charging models, and develop new skills to ensure overspend is avoided.

Get the skills to pay the bills

The first key skill is to ensure you understand the characteristics and requirements of the service or application being moved to the cloud.

Software-as-a-Service (SaaS) is pretty straightforward, whereas Platform-as-a-Service (PaaS) and particularly Infrastructure-as-a-Service (IaaS) are where design expertise is needed to ensure the applications and services being moved fit with your target cloud provider.

Secondly, once migrated, your team will need the ability to decipher cloud usage reports and invoices.  As well as the known costs for servers and storage, there will be additional ones for ancillary requirements such as IP addresses, domain resilience and data transfers into, out of and between servers.

As an example, for an IaaS instance on Amazon Web Services, there are a minimum of five and potentially eight metered costs that need to be tracked for a single internet-facing server. Microsoft Azure and other public cloud providers are comparable.

The complexity increases if the organisation is hosting complex, multi-server environments.  If other elements are required to run the application, such as security, resilience, management, patching and back-up, these will appear as additional charges.

This is less of an issue with SaaS as it usually has a standard per user per month charge, but with IaaS, and to some extent PaaS, other elements are added on top.

Having worked out the origin of each cost, a number of relatively simple and quick analyses can be done to identify the main reasons for any overspend. The key is to understand the likely use of each application.

Telling apart the cloud types

All public cloud services are metered, which can be both good and bad. If the way your applications work has been factored in when designing services prior to moving them to cloud, you are more likely to avoid unpleasant surprises.

In many services there is a cost per GB each time servers in different domains talk to each other, and a second cost per GB to send data over the internet.

For example, in AWS you are charged if you use a public IP address, and because you don’t buy dedicated bandwidth there is an additional data transfer charge against each IP address – which can be a problem if you create public facing websites and encourage people to download videos.

Every time a video is played, you’ll incur a charge, which may seem insignificant on its own but will soon add up if 50,000 people download your 100MB video, for example. In some applications servers have a constant two-way dialogue so costs that initially seem small can quickly escalate.
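As a back-of-the-envelope illustration of how those per-GB charges compound, the sketch below works through the video example; the $0.09/GB internet egress rate is an assumed figure for illustration, not any provider's actual tariff.

```python
# Rough egress cost estimate for the video download example above.
# The $0.09/GB data-transfer-out rate is an assumption, not a quoted price.
downloads_per_month = 50_000
video_size_gb = 0.1                # 100MB video
egress_rate_per_gb = 0.09          # assumed USD per GB transferred out

egress_gb = downloads_per_month * video_size_gb          # 5,000 GB per month
monthly_cost = egress_gb * egress_rate_per_gb
print(f"{egress_gb:,.0f} GB out per month = ${monthly_cost:,.2f}")   # about $450
```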

The same issue applies with resilience and service recovery, where you will be charged for transferring data between domains.

To understand costs accurately you need to know the frequency of snapshots, how big those snapshots are and the rate of data change.

AWS and Azure charge for resilience in different ways; both will keep a copy and bring it up if a host fails, but with AWS you need a different type of service and pay extra, whereas for Azure it is included as standard.

There is also an array of options available for storage; Azure alone has five to choose from, each with different dependencies, as well as a whole new set of terminology.

All these need to be understood, compared and evaluated as part of choosing a service.  If you find storage and back-up costs escalating, IT staff need to take action to prevent the situation getting worse.

The best way to avoid unexpected costs is to look closely at the different types of service available, such as on-demand, reserved or spot instances, before moving to public cloud, and match your workload and requirement to the instance type.

Reserved instances are much cheaper per hour than on-demand, but you are then tied in for a specified period, which means you are unable to move quickly should a better commercial option be introduced. If an application is not optimised for public cloud, consider retaining it in-house or use a managed cloud service with agreed, predictable costs.
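A quick break-even check makes the trade-off clearer. The hourly rates below are assumptions purely for illustration; the point is that a reservation only pays off above a certain utilisation level.

```python
# Sketch of an on-demand vs reserved break-even comparison with assumed rates.
ON_DEMAND_PER_HOUR = 0.10   # assumed on-demand price, USD
RESERVED_PER_HOUR = 0.06    # assumed effective reserved price, USD
HOURS_PER_MONTH = 730

def on_demand_cost(utilisation: float) -> float:
    """Cost when the instance only runs for a fraction of the month."""
    return ON_DEMAND_PER_HOUR * HOURS_PER_MONTH * utilisation

reserved_cost = RESERVED_PER_HOUR * HOURS_PER_MONTH   # paid whether it runs or not

for utilisation in (0.25, 0.50, 0.75, 1.00):
    od = on_demand_cost(utilisation)
    winner = "reserved" if reserved_cost < od else "on-demand"
    print(f"{utilisation:.0%} utilisation: on-demand ${od:6.2f} vs reserved ${reserved_cost:6.2f} -> {winner}")
```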

Finally, the pace of change in public cloud is very rapid; new and enhanced services are frequently introduced, and not often well publicised. To get best value you need to be prepared to regularly migrate your services between different instance types, classes or families. The cloud provider will not do this for you, so you will need the skills to do it yourself or contract a third party to do it for you.


February 22, 2018  9:20 AM

Cryptocurrency and colocation: Breaking down the Bitcoin mining barriers

Caroline Donnelly
Bitcoin, Colocation

In this guest post, Jack Bedell-Pearce, managing director of service provider 4D Data Centres, shares his thoughts on the role colocation can play in helping cryptocurrency miners boost their profits.  

Despite recent fluctuations and a downturn in value, Bitcoin remains one of the most popular cryptocurrencies.

While many see the potential profits in the likes of Ethereum, Ripple, and other up and coming cryptocurrencies, many more see mining them as a risky, and potentially unprofitable venture.

As the value of cryptocurrencies has increased over time, so have the challenges associated with mining them.

That’s because all cryptocurrencies rely on blockchain: a distributed, peer-to-peer ledger technology that ensures cryptocurrency transactions are validated and secure.

Miners add new blocks to the chain by using mining software to solve proof-of-work puzzles based on Secure Hash Algorithms (SHA-256, in Bitcoin's case) – and in return, they receive cryptocurrency units.

Because miners are effectively competing to be the first to solve each puzzle, budding miners can encounter challenges, not least because many cryptocurrencies limit how many units are in circulation at any time.
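The toy sketch below shows the shape of that race: each miner varies a nonce until the SHA-256 hash of the candidate block data falls below a difficulty target, and the first to succeed earns the reward. Real networks use vastly harder targets and proper block structures; this is only an illustration.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash of the block data falls
# below a target. Difficulty here is trivial compared with real networks.
import hashlib

def mine(block_data: str, difficulty_bits: int = 16):
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:        # first miner to reach this wins the block
            return nonce, digest
        nonce += 1

nonce, digest = mine("previous-hash|transactions|timestamp")
print(f"solved with nonce {nonce}: {digest}")
```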

Furthermore, the mining scene for popular cryptocurrencies can be very competitive, making it difficult to get started. And, while less popular currencies may be simpler to mine, there’s no guarantee they will hold their value long-term.

Another key concern is securing enough energy, space and compute resources to power their cryptocurrency mining software in the first place.

Calculating the cost of cryptocurrency

Successful cryptocurrency mining requires a range of important assets that can potentially come at a high cost.

Firstly, all miners need hardware to power their mining applications. Some use a conventional CPU, others favour a customised graphics processing unit (GPU) or field-programmable gate array (FPGA). More recently some miners have started using pre-programmed, application-specific integrated circuits (ASICs).

Whatever hardware you decide to use, you’ll need to carefully consider how it balances cost and flexibility, and how this stacks up against potential profits.

While mining hardware often has a small physical footprint, the GPUs and ASICs they contain consume vast amounts of power. And when you factor in the additional power cost of keeping the hardware cool, it’s a significant outlay that can cut deep into potential profits.

For example, the bitcoin network currently uses approximately 16TWh of electricity per year, accounting for 0.08% of the world’s energy consumption, and the energy cost of a single transaction could power five households for a day.

Because solved blocks must be submitted to the cryptocurrency network, it's important for your mining operation to have a stable network connection.

Having a low-latency network connection gives users the best possible chance to solve a block and mine the cryptocurrency before anyone else can.

Significant players in the mining community have also been the targets of distributed denial of service (DDoS) attacks in the past. So, if you’re planning on mining seriously, you’ll want to ensure you have a secure network with protective measures in place to keep downtime to a minimum.

Similarly, physical security should also be a key concern if you plan on mining seriously. Without a secure site for keeping your mining hardware safe, you run the risk of theft.

Combining colocation and cryptocurrency

All of these mining requirements add up to a significant investment. While the costs can be substantial, the opportunity for generating revenue is higher than ever – and it’s an opportunity that many will want to capitalise on before the mining market becomes even more saturated.

So how can you cut the costs, reduce the risks of mining, and make the most of the cryptocurrency opportunity?

Colocation can help reduce the risks and costs associated with cryptocurrency mining – and maximise the amount of profit you can make from it.

By moving your mining equipment into a shared data centre managed by a third party you can:

  • Significantly reduce power cost: datacentres are designed to handle massive energy requirements in the most efficient way possible
  • Get a stable, low-latency network: datacentres offer enterprise-class internet with significantly higher uptimes
  • Secure valuable mining assets: datacentres can provide a myriad of security measures, ranging from CCTV and guards, to comprehensive DDoS protection.



February 20, 2018  1:32 PM

Next-generation cloud: Is serverless the answer?

Caroline Donnelly
cloud, containers

In this guest post, Naveen Kumar, vice president of innovation, enterprise software and consumer at global design and engineering company Aricent, makes the case for serverless computing.

As far as technology concepts go, containers and serverless computing are attracting fans and baffling IT departments in equal measure. And, while there is a degree of familiarity with the former, many are still trying to work out what role the latter will play within their organisation’s wider IT strategy.

Containers are an evolution of virtualisation technologies, and are abstracted from the underlying host execution environment, allowing them to be ported across other environments and multiple clouds.

Serverless, meanwhile, is an evolution of cloud computing and a bit of a misnomer. It does not mean “no servers”, but that developers no longer have to worry about underlying server and capacity management.

Developers write application functionality using serverless APIs and deploy their functionality on the underlying serverless platform, which – in turn – takes care of the provisioning and scaling up and down of resources based on usage.

The platform automatically manages services, which reduces operational overheads, and developers only pay for what they use.

The concept rocketed in popularity with the introduction of AWS Lambda in 2014, paving the way for the emergence of Azure Functions, Google Cloud Functions and Apache OpenWhisk.
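For a flavour of the programming model, the sketch below is a minimal AWS Lambda-style function in Python: the developer writes only the handler, and the platform provisions, scales and bills per invocation. The API Gateway-style event shape is an assumption for the example.

```python
# Minimal serverless function in the AWS Lambda Python style: the platform
# calls lambda_handler per event; there is no server or capacity to manage.
import json

def lambda_handler(event, context):
    """Respond to an API Gateway-style request with a small JSON payload."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```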

Containers and serverless technologies are not mutually exclusive. There is a need for both and a “mix and match” approach can be a very efficient way of managing business and other functions.

If vendor lock-in and fine-grained control are major concerns, then containers could be the way to go. Typically, serverless has been used for limited tasks, such as running regularly scheduled functions; for applications with multiple entry points, or those running in dedicated virtual machines (VMs), containers would be more efficient.

Making the case for serverless

Serverless does have a number of advantages over containers. It is utilised more for functional capabilities than executing business functions, and can automatically scale in response to spikes in demand and comes with a granular cost-model.

Furthermore, it brings operational simplicity and cost savings as programs can be run and terminated on call. Importantly, it makes the product life cycle more efficient and reduces overheads.

It also paves the way for cloud to truly function as a utility because it does not require any VM provisioning or upfront commitments. Enterprises only pay for what they use.

However, it comes with risks, particularly around service-level agreements (SLAs). A significant amount of operational overhead, previously managed by enterprises, now moves to serverless platform providers.

This means enterprises or application developers have to monitor their apps (error tracking and logging) and SLAs, rather than the underlying infrastructure.

Overcoming the challenges

Organisations planning to ramp up their serverless deployments need to consider a number of factors. They include speed of development and release, the complexity of the solution, the SLA risks, as well as the threat of management and operational overheads and vendor lock-in.

While containers can provide better control of infrastructure and distributed architecture, with serverless you have less control of stack implementation and more focus on the business functions that can be executed.

These factors can be seen as part of an evolutionary process for cloud computing which will ultimately make it easier to use.

Already-virtualised, monolithic applications can be containerised with some effort but moving to serverless computing requires rewriting from the ground up. That is why the latter is worth considering for new applications, where time-to-market is critical.

What to consider

There are several concrete challenges any enterprise should consider when weighing up a serverless solution.

  1. It may require existing applications to be rewritten entirely – if you want to run them as functions on a provider's platform.
  2. It could contribute towards vendor lock-in, as the application is dependent on the service provider.
  3. Tools for developing and debugging serverless applications are maturing, while standards for moving between different platforms are yet to emerge.
  4. Network latency and costs are additional technical considerations.

The efficiencies gained from serverless increasingly make it seem like an attractive option, despite the potential drawbacks in terms of set-up time, control and quality-of-service. There is no doubt serverless will have a major impact on enterprises over the next few years.

AWS, Google, IBM and Microsoft  have freemium products that allow enterprises to try before they buy, and this is fuelling interest.

Put simply, serverless computing provides one more option for enterprises to use public cloud services.  A significant number of workload jobs currently deployed on public clouds have dynamic demand characteristics.

Serverless adoption will increase as it brings developer efficiencies, performance improvements and speeds up time-to-market. Adoption will further be driven as new or existing applications are re-architected to leverage public cloud services.  Its uptake is strongly linked to overall cloud adoption, which is growing and will continue to grow over time. In short: serverless computing is here to stay.


February 15, 2018  11:05 AM

The Field of (Cloud) Dreams: if you build it, will they come?

Caroline Donnelly
cloud, Cloud adoption, Cloud strategy

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation, draws inspiration from a 1989 Kevin Costner movie to advise enterprises on how best to approach their move to the cloud…

Using cloud offers enterprises opportunities to increase business agility and optimise costs, and makes it possible for them to compete more effectively with challenger businesses that benefit from being born in the cloud.

However, the cloud journey can mask a significant number of challenges that must be carefully navigated before those benefits can be realised.

Firstly, their strategy needs buy-in from all business stakeholders, and sponsorship at an executive level, ideally with board representation.

In reality, cloud initiatives often start from silos within an organisation, resulting in poor engagement, and failure to realise the potential transformational benefits.

These are not just the rogue instances of shadow IT, or stealth IT, that often spring to mind. In fact, at first sight cloud adoption can often appear to be driven from what might seem very logical areas within a business.

Agility vs. cost in cloud

When there’s a particular focus on cost optimisation, adoption is usually sponsored from within existing infrastructure delivery areas. This might typically be when heads of infrastructure or CTOs identify an opportunity to consolidate on-premise datacentre infrastructure with the expectation of cost savings.

Their thinking is correct. Closing down ageing facilities will reduce capital expenditure significantly. However, any new “off-premise datacentre” must be properly planned for it to deliver sustainable benefits to the business.

From an agility perspective, the journey to the cloud may be rooted in digital or innovation areas.

However, problems often arise when attempts are made to either broaden the reach of cloud within the organisation or when steps are made to take the capability into production.

A failure to have broad organisational engagement and support from all key stakeholders in either of these cases is likely to see best intentions stall at best, and often result in overall failure.

Stakeholder engagement

Without buy-in from application owners, data owners and relevant CIO teams, the promised cost benefits associated with migrating applications off-premise rarely materialise. The danger is you end up investing significant effort to develop a new cloud-based capability but, without an overarching strategy to exploit cloud, it may increase, rather than reduce, your costs.

As in the 1989 Field of Dreams movie, starring Kevin Costner, you need to ensure that once you build it, they will come.

Successful outcomes will only be fully achieved when the cloud strategy is embraced at an organisational level, with engagement across the enterprise. Stakeholders have to buy-in to a clearly defined strategy and understand the motivation for moving applications to the cloud. This has to be driven by a vocal, suitably empowered C-level executive.

It’s not enough to have a single, passionate senior (cheer) leader. This individual also has to sit at the right level in the organisation to have the power to influence and align disparate teams to commit to the same journey. Whilst cloud adoption requires top-down support, it also requires bottom-up evangelism for success.

Indeed, the need to evangelise and sell the benefits of cloud within an enterprise shouldn't be underestimated. Establishing some quick wins will not only support integration from a technical and operational perspective, but provide the success stories that can help to shift the cultural needle. A focus on taking the organisation with you on the journey will contribute greatly to a successful outcome.

From disparate to joined up

If you look around your organisation and see an enthusiastic but renegade team heading off under their own steam to build their own Field of Dreams, don’t panic.

It is still possible to bring structure into the cloud adoption programme and reshape it as an organisation-wide, joined-up initiative. Start by taking a step back and bringing all the stakeholders inside the tent.

Clearly it will be more challenging to achieve retrospective buy-in but it is certainly possible to do so, providing the programme has not passed a tipping point.

Ensure you avoid disenfranchising any key stakeholder groups, because if this happens then it will be difficult to recover the situation.

Take a cue from Kevin Costner: don’t stand alone in a field of corn but act decisively to secure the right support and you too could be nominated for awards for your starring role in cloud adoption.


February 2, 2018  12:21 PM

Cloud billing: Cutting through the complexity

Caroline Donnelly
AWS, cloud, Google, IaaS

In this guest post, Kat Lee, data analyst at service provider Cloudreach, shares some best practice on cloud billing to help enterprises cut costs.

IT cost control and management is a hot topic of debate in enterprises seeking to drive down operational expenditure. As the cloud adoption mentality has increasingly become "not if, but when", cloud cost control has become a key budgeting topic.

The majority of enterprises nowadays don’t just choose one cloud service provider, having realised the benefits of adopting a hybrid cloud model. But this now presents its own challenges. Each service provider has a different way of presenting cost and usage reporting, which can lead to complexity and confusion for end users.

Gartner research suggests 95% of business and IT leaders find cloud billing the most confusing element of using public cloud services. This confusion leads to a lack of governance and oversight which, of course, has a financial impact to the company.

Take stock of cloud resources

To get a handle on this, enterprises must get to the point where they know the amount of cloud resources being consumed by their teams.

The first point of action is to create a customised invoicing process to gather all the data related to cloud spend and break it down to a granular level. That way each employee and department can benchmark their usage. This will help the finance team identify any increases in spend as and when they happen and to understand the cause of them.

Cloud usage and cost reporting tools are of critical importance here. These tools are vital to give clarity on usage and cost allocation and give IT finance teams more accurate information when resource planning, avoiding overspend and an angry CIO.

Taking ownership of cloud billing

Amazon Web Services (AWS) and Google Cloud Platform (GCP)’s recent per-second billing could offer customers huge savings, but to take full advantage, end users must understand and own their billing process.

To get a handle on spend, enterprises must break down and allocate costs to the right business group, department, project and individual employee. Resource tagging is absolutely vital in this regard to get true cloud governance in place and understand which project is consuming what resources.
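As a sketch of what that looks like in practice, the snippet below tags an EC2 instance with boto3 so its spend can be attributed in billing reports; the instance ID and the tag keys and values are placeholders.

```python
# Sketch: tag a compute resource so its cost can be allocated to the right
# project, department and owner in billing reports. IDs and values are
# placeholders for illustration.
import boto3

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # placeholder instance ID
    Tags=[
        {"Key": "Project", "Value": "checkout-service"},
        {"Key": "Department", "Value": "marketing"},
        {"Key": "Owner", "Value": "jane.doe"},
    ],
)
```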

Understanding the numerous ways cloud providers offer cost savings is vital to keeping cloud spend under control too.

AWS, Azure and GCP all offer discounts for committed usage of virtual machines, in varied ways, from upfront annual payments to reserved instances and automatic discounts on long running virtual machines. AWS also offers Spot Instances – a method of purchasing compute time at a significantly reduced cost when there is spare capacity – a virtual machine marketplace powered by supply and demand.
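Before committing a fault-tolerant workload to Spot capacity, it is worth checking how prices have been moving; a minimal sketch with boto3 is shown below, where the instance type and region are assumptions.

```python
# Sketch: inspect recent Spot prices before moving a workload onto Spot
# capacity. The instance type and region are illustrative choices.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)
for record in response["SpotPriceHistory"]:
    print(record["AvailabilityZone"], record["SpotPrice"], record["Timestamp"])
```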

These cost saving opportunities are increasingly complex and the most mature cloud consumers will have dedicated resources focussed on managing costs.

Where next for cloud billing?

In the next 12 months, there is going to be increased growth and maturity in the cloud market. As enterprises increasingly see the inevitability of a move to the cloud, a key differentiator between cloud service providers is going to be how they deliver and bill enterprises for computing resources.

Having transparency and ownership over the billing cycle, and an understanding of the cost optimisation options available is the next step in helping enterprise customers make the most of cloud.


January 29, 2018  10:54 AM

Serverless vs. Microservices: What you need to know for cloud

Caroline Donnelly
cloud, Microservices

In this guest post, Neil Turvin, CEO at nearshore software development company Godel Technologies, shares his thoughts on how serverless computing is gaining ground in the delivery of cloud computing.

Although still in its infancy, serverless computing shares some of the characteristics of microservices, but is very different in the way it delivers cloud computing. There are pros and cons to both, but serverless is becoming an increasingly attractive option for a number of reasons.

Pricing is the first differentiator. Serverless is pay-as-you-go, making it an attractive option for applications with infrequent requests or for startup organisations. It also reduces operational costs, as infrastructure and virtual machines are handled by the service provider rather than managed directly on-premise or via IaaS or PaaS.

Scalability is also outstanding on serverless architecture – by its nature it can handle spike loads more easily, and scaling is managed quickly, automatically and transparently. The only caveat is the cap on the maximum number of requests that can be processed, making it unsuitable for high-load systems.

That brings me to architectural complexities – which can be a mixed bag.

Serverless is even more granular than microservices, and provides a much higher degree of functionality. On the flip-side, that also creates much more complexity. It also has a number of other restrictions, such as a limited number of requests and operation duration, and supports fewer programming languages.

Its components are also often cloud provider specific, which can be problematic to change. Stateless by design, serverless must manage state strictly outside its functions – meaning no more in-memory cache.
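In practice that means any state a function needs must live in an external store. The sketch below keeps a simple counter in DynamoDB rather than in process memory; the table name and key schema are assumptions for the example.

```python
# Sketch of externalising state from a stateless function: the counter lives
# in DynamoDB, not in an in-memory cache. Table name and keys are assumptions.
import boto3

table = boto3.resource("dynamodb").Table("page-hits")

def handler(event, context):
    page = event.get("page", "/")
    result = table.update_item(
        Key={"page": page},
        UpdateExpression="ADD hits :one",               # atomic increment in the store
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"page": page, "hits": int(result["Attributes"]["hits"])}
```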

Furthermore, serverless functions can also act as scheduled jobs, event handlers etc, and not just as services.

Sorting through the serverless complexity

The level of granularity you get with serverless can also affect tooling and frameworks. This is because the higher the granularity, the more complicated integration testing becomes, making it more difficult to debug, troubleshoot and test.

Microservices by comparison is a mature approach and is well supported by tools and processes.

Time to market also comes into play. Due to the lightweight programming model and operational ease of a serverless architecture, time to market is greatly reduced for new features, which is a key driver for many businesses.

It also means prototypes can be quickly created for Internet of Things (IoT) solutions using Function as a Service (FaaS) for data processing to demonstrate a new technology to investors or clients.

Although microservices still provides a solid approach to service oriented architecture (SOA), serverless is gaining ground in event-based architecture and clearly has advantages in terms of reduced time to market, flexible pricing and reduced operational costs.

It’s unlikely for now that serverless will or should be the approach for every system – but watch this space as it matures. For now the best solution is a combination of both architectural approaches to help deliver and take advantage of the benefits the cloud brings to you and your customers.


January 24, 2018  3:48 PM

Unblocking the database bottleneck in enterprise DevOps deployments

Caroline Donnelly
Automation, cloud, Database, DBA, DevOps

In this guest post, DevOps consultant and researcher Nicole Forsgren, PhD, shares her advice on what enterprises can do to overcome the database problem when scaling up their DevOps endeavours.  

High performing DevOps teams that have transformed their software delivery processes can deploy code faster, more reliably, and with higher quality than their low-performing peers.

They do this by tackling technical challenges with effective and efficient automation, adopting processes drawn from the lean and agile canon, and fostering a culture that prioritises empathy and information flow.

A key to these successful transformations is fast feedback loops, and integrating key stakeholders early in the development and delivery process.

In the earliest iterations of DevOps, the first key stakeholder was IT operations: taking feedback about maintainability and scalability of your code. You could frame it this way: learn why IT operations so often put up barriers to code deploys, address that feedback sooner in the pipeline, and continue to work together.

Once dev and ops are humming along smoothly, many teams find they still hit bottlenecks in their deployments, particularly as they scale. And the trend I’m hearing more and more often is that this bottleneck is happening at the database.

Unfortunately this is not a challenge you can turn a blind eye to. It's not going away anytime soon, and the tooling you are using to automate the build and deployment of your apps will not solve this problem.

No matter how fast you can get your application releases going, it's more than likely they'll be waiting for changes to the database before they can be shipped. There's no time like the present to bring data into DevOps as the next move toward shifting left.

The devil’s in the database

Teams and organisations are humming along and delivering code. Things start to get more exciting as applications start to scale. Under the covers, however, the database is a shared resource across dev, test, and production.

In addition, the database release process is usually manual, making it a carefully orchestrated process that is slow, risky, and prone to errors. The database administrators (DBAs) guard this process carefully, and with good reason.

Early on, performance is acceptable, but as your application continues to grow and scale, this database release process gets more difficult. Or perhaps your application is already at scale, and you are shipping code faster, when suddenly you find the application code outpacing your database schema changes. “Brute force” only works so long before your DBAs are burning out and just can’t keep up.

When faced with more requests for changes to the database than are feasible or safe to carry out, the DBAs respond by protecting their resource and saying "no." Suddenly, database changes can take weeks, while the competing software releases are using continuous delivery practices and pushing to production daily or even hourly.

Addressing the issue

Solving the database constraint in DevOps takes a few forms, and includes culture and tools.

Let’s start with culture. You’ll want to start shifting your database work upstream into the development phase and find problems before they get into production. This is similar to the “shift left” emphasis we’ve seen in other critical areas that are often left to the end of the delivery pipeline, like security.

To truly shift data left, allow your engineers to follow the same process they use today for the app. Check your change scripts into source control, run automated builds, get notified when database changes 'break the build' and provide your engineers with the tooling they need to find and fix database problems long before they become an issue in production.
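A minimal sketch of that kind of pipeline check is shown below: every change script in version control is applied, in order, to a disposable database, and any failure fails the build. SQLite and the migrations/ directory layout are assumptions for illustration; the same idea applies to whichever database and migration tool a team actually uses.

```python
# Sketch: let database changes "break the build" by replaying every change
# script from source control against a throwaway database in CI.
# SQLite and the migrations/ layout are illustrative assumptions.
import pathlib
import sqlite3
import sys

def check_migrations(migrations_dir: str = "migrations") -> int:
    conn = sqlite3.connect(":memory:")          # disposable database per build
    for script in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
        try:
            conn.executescript(script.read_text())
        except sqlite3.Error as err:
            print(f"BUILD BROKEN by {script.name}: {err}")
            return 1
        print(f"applied {script.name}")
    return 0

if __name__ == "__main__":
    sys.exit(check_migrations())
```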

As a database professional, give your engineering teams the tools they need to treat database code just like app code. This will equip them to know when the database is the problem, and teach them where and how to look for it. This work is important and can be shared upstream. These practices elevate DBAs to valuable consultants, not downstream constraints.

This may feel foreign at first, as the role of the DBA expands from building tools to collaborating and consulting across whole teams of people.

Where they were once protective and working on technology alone or with only other DBAs, they are now open and consultative with others. Where their work with the database was once the last stop in the chain of application release automation, this work is (hopefully) triggered much earlier in the pipeline.

Done right, this approach frees the DBA team from reviewing every change in every change script, so they can spend their valuable time on more important tasks such as patching and upgrading, performance tuning, data security, and capacity planning.

Expanding the reach of a DBA

Engineering teams usually start with just a few DBAs, so you have to scale the person — and this is done with technology and automation.

This helps teams and organisations work faster, protect their data, and increase productivity. Just as with any other part of the DevOps process, automation speeds up our work: As the application and the databases start to scale, DBAs will find themselves needing to scale their services.

Deploying 100 database servers is much faster (and more effective) with scripts compared to doing it manually.

Rolling out database schema changes is a high-risk move; using tooling and technology helps mitigate this risk in three ways. First, by introducing and testing these changes faster in the application development pipeline, you discover errors sooner, allowing teams to find and fix changes before they hit production.

Second, using automation provides traceable steps and verification if any steps need to be repeated or reversed. Third, automation increases productivity by creating repeatable processes that allow you to manage production and database schema migration.

When you’re developing and delivering code with speed and stability, but you start to hit a database constraint, don’t panic. Just apply the same DevOps principles you used for software: focus on bridging culture with your database team, improving and streamlining process with your database team, and leveraging a smart investment in tooling and automation.

Author acknowledgements

Many thanks to Silvia Botros and Darren Pedroza for sharing their thoughts and experiences with databases in DevOps transformations. I would also like to thank Camille Fournier for reading an early draft of this post.

Also, for more detail, I suggest you check out Database Reliability Engineering, by Laine Campbell and Charity Majors. It is essential reading for those digging into databases and database administration in DevOps today.


