Ahead in the Clouds


March 19, 2018  12:05 PM

Data-sharing and cloud: A big data match made in heaven

Caroline Donnelly
Big Data, cloud, Collaboration, Data sharing

In this guest post, Thibaut Ceyrolle, vice president for Europe, Middle-East and Africa (EMEA) at data warehousing startup Snowflake Computing, makes the business case for using cloud to boost data-sharing within enterprises and beyond.

The phrase, ‘data is the new oil’, continues to hold true for many organisations, as the promise of big data gives way to reality, paving the way for companies to glean valuable insights about their products, services, and customers.

According to Wikibon, the global big data market will grow from $18.3bn in 2014 to an incredible $92.2bn by 2026, as the data generated by the Internet of Things (IoT), social media and the web continues to grow.

Traditional data warehousing architectures and big data platforms, such as Hadoop, which grew out of the need to handle large datasets, are now beginning to show signs of age as the velocity, variety and volume of data rapidly increase.

To cope with this new demand, the evolution and the innovation in cloud technologies have steadily simmered alongside the growth in data. Cloud has been a key enabler in addressing a vital customer pain-point of big data: how do we minimise the latency of data insight? Data-sharing is the answer.

The business of big data

By channelling big data through purpose-built, cloud-based data warehousing platforms, organisations can share it internally and with one another in real time, with greater ease, and respond to the market faster.

Through cloud, organisations are able to share their data in a governed and secure manner across the enterprise, ending the segregation and latency of data insights between internal departments and external third parties alike.

Previously, analysing data sets was limited to the IT or business intelligence teams that sat within an organisation. But as data continues to grow and becomes an increasingly important ‘oil’ for enterprises, business requirements have shifted.

For data-driven companies, data is now being democratised across the entire workforce. Different sections of a business, such as sales, marketing or legal, all require quick and easy access to data, but older big data architectures prevented this.

The advent of cloud, by contrast, has made it possible to support concurrent users effectively, meaning everyone from junior to board-level employees can gain the holistic insight needed to boost their understanding of both the business and its customers.

On-premise legacy solutions also struggle to process large datasets, taking days or even weeks to extract and process the data to a standard ready for organisations to view. This data would then need to be shifted to an FTP server, an Amazon Web Services S3 bucket or even sent around by email.

But the cloud has become a referenceable space, much like how the internet operates. We are easily able to gain insights by visiting a webpage, and in the same way, the cloud can also be used as a hub to quickly access this data.

Data-sharing enables collaboration

Data-sharing not only improves the speed of data insights, but can also help strengthen cross-company collaboration and communication protocols too.

Take a multinational company, for example: it is often divided into many subsections, with employees based across the globe. Yet, through cloud, data sharing can help bring these separate divisions together without having to replicate data insights for each region.

As data volumes increase year-on-year, we also see data sharing evolve in the process. The insatiable desire for data will result in organisations tapping into the benefits of machine learning to help sift through the mountains of information they receive.

Organisations that capitalise on machine learning will also be better positioned to draw on the variety of data sources available and glue them together into an interconnected data network.

Most of the data pulled is from cloud-based sources and, as a result, organisations can use machine learning to gain 360-degree visibility of customer behaviour. This enables them to tailor specific products or services to customers based on their preferences. Without the cloud, this simply wouldn’t have been possible.

Walmart, the largest retailer in the world, has grown its empire by capitalising on big data and using machine learning to maximise customer experiences. By using predictive analytics, Walmart stores can anticipate demand at certain hours and determine how many employees are needed at the checkout, helping improve instore experiences.

Other retailers, such as Tesco, are similarly using machine learning to better understand their customers’ buying behaviour before products are purchased, to provide a more seamless e-commerce experience.

As the data-sharing economy grows, data’s reputation as the new oil will continue to surge in the years ahead. Cloud-based technologies are working hand-in-hand with big data to not only offer key insights, but also serve as a foundation for collaboration and communication between all parties within the business ecosystem.

With more and more organisations drawing on cloud and machine learning, it will only fuel the business decisions made through data, offering unparalleled opportunities to respond to customer demands faster than ever before.

February 28, 2018  4:45 PM

Price caps: Keeping public cloud costs under control

Caroline Donnelly

In this guest post, Richard Blanford, managing director of cloud services provider Fordway, shares his top tips on what IT departments can do to curb public cloud overspend.

As the costs of public cloud services continue to fall, moving services off-premise increasingly looks like a sensible business decision. However, there is much more to running a service than server capacity, and if organisations are not careful their costs can quickly spiral out of control.

To keep public cloud costs in check, in-house IT teams need to educate themselves on cloud charging models, and develop new skills to ensure overspend is avoided.

Get the skills to pay the bills

The first key skill is to ensure you understand the characteristics and requirements of the service or application being moved to the cloud.

Software-as-a-Service (SaaS) is pretty straightforward, whereas Platform-as-a-Service (PaaS) and particularly Infrastructure-as-a-Service (IaaS) are where design expertise is needed to ensure the applications and services being moved fit with your target cloud provider.

Secondly, once migrated, your team will need the ability to decipher cloud usage reports and invoices.  As well as the known costs for servers and storage, there will be additional ones for ancillary requirements such as IP addresses, domain resilience and data transfers into, out of and between servers.

As an example, for an IaaS instance on Amazon Web Services, there are a minimum of five and potentially eight metered costs that need to be tracked for a single internet-facing server. Microsoft Azure and other public cloud providers are comparable.

The complexity increases if the organisation is hosting complex, multi-server environments.  If other elements are required to run the application, such as security, resilience, management, patching and back-up, these will appear as additional charges.

This is less of an issue with SaaS as it usually has a standard per user per month charge, but with IaaS, and to some extent PaaS, other elements are added on top.

Having worked out the origin of each cost, a number of relatively simple and quick analyses can be done to identify the main reasons for any overspend. The key is to understand the likely use of each application.
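As a simple illustration of this kind of analysis, the sketch below ranks the metered components of a single internet-facing IaaS server by their share of the monthly bill. The figures are placeholders invented for the example, not real provider prices.

```python
# Rank the metered components of a single internet-facing IaaS server to see
# which ones drive the bill. All figures are hypothetical placeholders, not
# real provider prices.
monthly_costs_gbp = {
    "compute_hours": 62.40,
    "block_storage": 18.00,
    "snapshots": 7.50,
    "public_ip": 2.60,
    "data_transfer_out": 41.30,
    "inter_zone_transfer": 9.80,
}

total = sum(monthly_costs_gbp.values())
print(f"Total: £{total:.2f}")

# Sort by share of the bill to surface the main overspend drivers.
for item, cost in sorted(monthly_costs_gbp.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:>20}: £{cost:6.2f} ({cost / total:5.1%})")
```

Sorting by share of spend is usually enough to show whether compute, storage or data transfer is the component worth investigating first.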

Telling apart the cloud types

All public cloud services are metered, which can be both good and bad. If the way your applications work has been factored in when designing services prior to moving them to cloud, you are more likely to avoid unpleasant surprises.

In many services there is a cost per GB each time servers in different domains talk to each other, and a second cost per GB to send data over the internet.

For example, in AWS you are charged if you use a public IP address, and because you don’t buy dedicated bandwidth there is an additional data transfer charge against each IP address – which can be a problem if you create public facing websites and encourage people to download videos.

Every time a video is played, you’ll incur a charge, which may seem insignificant on its own but will soon add up if 50,000 people download your 100MB video, for example. In some applications servers have a constant two-way dialogue so costs that initially seem small can quickly escalate.
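To make the arithmetic concrete, here is a rough back-of-the-envelope sketch; the per-gigabyte egress rate is an assumed figure for illustration, not a quoted price.

```python
# Back-of-the-envelope egress cost for the video example. The per-GB rate is
# an assumption for the sketch, not a quoted provider price.
downloads_per_month = 50_000
video_size_gb = 0.1                 # a 100MB video
egress_rate_per_gb = 0.07           # assumed charge per GB transferred out

monthly_egress_gb = downloads_per_month * video_size_gb
print(f"Data out: {monthly_egress_gb:,.0f} GB, "
      f"costing roughly {monthly_egress_gb * egress_rate_per_gb:,.2f} per month")
```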

The same issue applies with resilience and service recovery, where you will be charged for transferring data between domains.

To understand costs accurately you need to know the frequency of snapshots, how big those snapshots are and the rate of data change.

AWS and Azure charge for resilience in different ways: both will keep a copy and bring it up if a host fails, but with AWS you need a different type of service and pay extra, whereas with Azure it is included as standard.

There is also an array of storage options available: Azure alone has five to choose from, each with different dependencies and a whole new set of terminology.

All these need to be understood, compared and evaluated as part of choosing a service.  If you find storage and back-up costs escalating, IT staff need to take action to prevent the situation getting worse.

The best way to avoid unexpected costs is to look closely at the different types of service available, such as on-demand, reserved or spot instances, before moving to public cloud, and match your workload and requirement to the instance type.

Reserved instances are much cheaper per hour than on-demand, but you are then tied in for a specified period, which means you are unable to move quickly should a better commercial option be introduced. If an application is not optimised for public cloud, consider retaining it in-house or use a managed cloud service with agreed, predictable costs.
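A quick way to sanity-check that match is a break-even calculation. The sketch below uses assumed hourly rates, not real price-list figures, to show the utilisation level at which a reserved instance starts to beat on-demand.

```python
# Break-even between on-demand and reserved pricing, using assumed hourly
# rates rather than real price-list figures.
on_demand_per_hour = 0.10      # pay only for the hours actually used
reserved_per_hour = 0.065      # lower effective rate, but billed for every hour
hours_in_month = 730

breakeven_hours = reserved_per_hour * hours_in_month / on_demand_per_hour
print(f"Reserved pays off above ~{breakeven_hours:.0f} hours per month "
      f"({breakeven_hours / hours_in_month:.0%} utilisation)")
```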

Finally, the pace of change in public cloud is very rapid; new and enhanced services are frequently introduced, and not often well publicised. To get best value you need to be prepared to regularly migrate your services between different instance types, classes or families. The cloud provider will not do this for you, so you will need the skills to do it yourself or contract a third party to do it for you.


February 22, 2018  9:20 AM

Cryptocurrency and colocation: Breaking down the Bitcoin mining barriers

Caroline Donnelly
Bitcoin, Colocation

In this guest post, Jack Bedell-Pearce, managing director of service provider 4D Data Centres, shares his thoughts on the role colocation can play in helping cryptocurrency miners boost their profits.  

Despite recent fluctuations and a downturn in value, Bitcoin remains one of the most popular cryptocurrencies.

While many see the potential profits in the likes of Ethereum, Ripple, and other up and coming cryptocurrencies, many more see mining them as a risky, and potentially unprofitable venture.

As the value of cryptocurrencies has increased over time, so have the challenges associated with mining them.

That’s because all cryptocurrencies rely on blockchain: a distributed, peer-to-peer ledger technology that ensures cryptocurrency transactions are validated and secure.

Miners add new blocks to the chain by using mining software to solve puzzles based on secure hash algorithms – and in return, they receive cryptocurrency units.

Because miners are effectively competing to be the first to solve a particular hash puzzle, budding miners can encounter challenges, not least because many cryptocurrencies limit how many units are in circulation at any time.
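For readers who want to see the principle, the toy sketch below searches for a nonce whose SHA-256 hash falls below a difficulty target, which is the essence of the competition. Real mining works on block headers against vastly harder targets, so treat this purely as an illustration.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return the first nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# A low difficulty keeps the toy example fast; real networks are many orders
# of magnitude harder, which is what makes dedicated hardware worthwhile.
print("Found nonce:", mine("example block data", difficulty=4))
```

Raising the difficulty by one hex digit makes the search roughly sixteen times longer, which is why serious miners care so much about raw hashing power.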

Furthermore, the mining scene for popular cryptocurrencies can be very competitive, making it difficult to get started. And, while less popular currencies may be simpler to mine, there’s no guarantee they will hold their value long-term.

Another key concern for would-be miners is securing enough energy, space and compute resources to power their cryptocurrency mining software in the first place.

Calculating the cost of cryptocurrency

Successful cryptocurrency mining requires a range of important assets that can potentially come at a high cost.

Firstly, all miners need hardware to power their mining applications. Some use a conventional CPU, others favour a customised graphics processing unit (GPU) or field-programmable gate array (FPGA). More recently, some miners have started using pre-programmed, application-specific integrated circuits (ASICs).

Whatever hardware you decide to use, you’ll need to carefully consider how it balances cost and flexibility, and how this stacks up against potential profits.

While mining hardware often has a small physical footprint, the GPUs and ASICs they contain consume vast amounts of power. And when you factor in the additional power cost of keeping the hardware cool, it’s a significant outlay that can cut deep into potential profits.

For example, the bitcoin network currently uses approximately 16TWh of electricity per year, accounting for 0.08% of the world’s energy consumption, and the energy cost of a single transaction could power five households for a day.

Because solved hashes must be submitted to the cryptocurrency network, it’s important for your mining operation to have a stable network connection.

Having a low-latency network connection gives users the best possible chance to solve a block and mine the cryptocurrency before anyone else can.

Significant players in the mining community have also been the targets of distributed denial of service (DDoS) attacks in the past. So, if you’re planning on mining seriously, you’ll want to ensure you have a secure network with protective measures in place to keep downtime to a minimum.

Similarly, physical security should also be a key concern if you plan on mining seriously. Without a secure site for keeping your mining hardware safe, you run the risk of theft.

Combining colocation and cryptocurrency

All of these mining requirements add up to a significant investment. While the costs can be substantial, the opportunity for generating revenue is higher than ever – and it’s an opportunity that many will want to capitalise on before the mining market becomes even more saturated.

So how can you cut the costs, reduce the risks of mining, and make the most of the cryptocurrency opportunity?

Colocation can help reduce the risks and costs associated with cryptocurrency mining – and maximise the amount of profit you can make from it.

By moving your mining equipment into a shared data centre managed by a third party you can:

  • Significantly reduce power costs: datacentres are designed to handle massive energy requirements in the most efficient way possible
  • Get a stable, low-latency network: datacentres offer enterprise-class internet connectivity with significantly higher uptime
  • Secure valuable mining assets: datacentres can provide a myriad of security measures, ranging from CCTV and guards to comprehensive DDoS protection.

 


February 20, 2018  1:32 PM

Next-generation cloud: Is serverless the answer?

Caroline Donnelly
cloud, containers

In this guest post, Naveen Kumar, vice president of innovation, enterprise software and consumer at global design and engineering company Aricent, makes the case for serverless computing.

As far as technology concepts go, containers and serverless computing are attracting fans and baffling IT departments in equal measure. And, while there is a degree of familiarity with the former, many are still trying to work out what role the latter will play within their organisation’s wider IT strategy.

Containers are an evolution of virtualisation technologies, and are abstracted from the underlying host execution environment, allowing them to be ported across other environments and multiple clouds.

Serverless, meanwhile, is an evolution of cloud computing and a bit of a misnomer. It does not mean “no servers”, but that developers no longer have to worry about underlying server and capacity management.

Developers write application functionality using serverless APIs and deploy their functionality on the underlying serverless platform, which – in turn – takes care of the provisioning and scaling up and down of resources based on usage.

The platform automatically manages services, which reduces operational overheads, and developers only pay for what they use.

The concept rocketed in popularity with the introduction of AWS Lambda in 2014, paving the way for the emergence of Azure Functions, Google Cloud Functions and Apache OpenWhisk.
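To give a flavour of the programming model, here is a minimal sketch of a Python function handler in the style used by platforms such as AWS Lambda. The event shape is an assumption for the example rather than any particular service’s schema.

```python
import json

def handler(event, context):
    """Entry point invoked by the serverless platform for each request."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally here for illustration; in production the platform supplies
# the event and context and scales instances up and down on demand.
print(handler({"name": "cloud"}, None))
```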

Containers and serverless technologies are not mutually exclusive. There is a need for both and a “mix and match” approach can be a very efficient way of managing business and other functions.

If vendor lock-in and fine-grained control are major concerns, then containers could be the way to go. Typically, serverless has been used for limited tasks, such as running regularly scheduled functions, whereas applications with multiple entry points running in dedicated virtual machines (VMs) are usually handled more efficiently by containers.

Making the case for serverless

Serverless does have a number of advantages over containers. It is utilised more for discrete functional capabilities than for executing entire business functions, can automatically scale in response to spikes in demand, and comes with a granular cost model.

Furthermore, it brings operational simplicity and cost savings as programs can be run and terminated on call. Importantly, it makes the product life cycle more efficient and reduces overheads.

It also paves the way for cloud to truly function as a utility because it does not require any VM provisioning or upfront commitments. Enterprises only pay for what they use.

However, it comes with risks around service-level agreements (SLAs). A significant amount of operational overhead, previously managed by enterprises, now moves to serverless platform providers.

This means enterprises or application developers have to monitor their apps (error tracking and logging) and SLAs, rather than the underlying infrastructure.

Overcoming the challenges

Organisations planning to ramp up their serverless deployments need to consider a number of factors. They include speed of development and release, the complexity of the solution, the SLA risks, as well as the threat of management and operational overheads and vendor lock-in.

While containers can provide better control of infrastructure and distributed architecture, with serverless you have less control of stack implementation and more focus on the business functions that can be executed.

These factors can be seen as part of an evolutionary process for cloud computing which will ultimately make it easier to use.

Already-virtualised, monolithic applications can be containerised with some effort but moving to serverless computing requires rewriting from the ground up. That is why the latter is worth considering for new applications, where time-to-market is critical.

What to consider

There are several concrete challenges any enterprise should consider when weighing up a serverless solution.

  1. It may require existing applications to be rewritten entirely – if you want to use a provider’s platform and the function on it.
  2. It could contribute towards vendor lock-in, as the application is dependent on the service provider.
  3. Tools for developing and debugging serverless applications are maturing, while standards for moving between different platforms are yet to emerge.
  4. Network latency and costs are additional technical considerations.

The efficiencies gained from serverless increasingly make it seem like an attractive option, despite the potential drawbacks in terms of set-up time, control and quality-of-service. There is no doubt serverless will have a major impact on enterprises over the next few years.

AWS, Google, IBM and Microsoft  have freemium products that allow enterprises to try before they buy, and this is fuelling interest.

Put simply, serverless computing provides one more option for enterprises to use public cloud services.  A significant number of workload jobs currently deployed on public clouds have dynamic demand characteristics.

Serverless adoption will increase as it brings developer efficiencies, performance improvements and speeds up time-to-market. Adoption will further be driven as new or existing applications are re-architected to leverage public cloud services.  Its uptake is strongly linked to overall cloud adoption, which is growing and will continue to grow over time. In short: serverless computing is here to stay.


February 15, 2018  11:05 AM

The Field of (Cloud) Dreams: if you build it, will they come?

Caroline Donnelly
cloud, Cloud adoption, Cloud strategy

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation, draws inspiration from a 1989 Kevin Costner movie to advise enterprises on how best to approach their move to the cloud…

Using cloud offers enterprises opportunities to increase business agility and optimise costs, and makes it possible for them to compete more effectively with challenger businesses that benefit from being born in the cloud.

However, the cloud journey can mask a significant number of challenges that must be carefully navigated before those benefits can be realised.

Firstly, their strategy needs buy-in from all business stakeholders, and sponsorship at an executive level, ideally with board representation.

In reality, cloud initiatives often start from silos within an organisation, resulting in poor engagement, and failure to realise the potential transformational benefits.

These are not just the rogue instances of shadow IT, or stealth IT, that often spring to mind. In fact, at first sight cloud adoption can often appear to be driven from what might seem very logical areas within a business.

Agility vs. cost in cloud

When there’s a particular focus on cost optimisation, adoption is usually sponsored from within existing infrastructure delivery areas. This might typically be when heads of infrastructure or CTOs identify an opportunity to consolidate on-premise datacentre infrastructure with the expectation of cost savings.

Their thinking is correct. Closing down ageing facilities will reduce capital expenditure significantly. However, any new “off-premise datacentre” must be properly planned for it to deliver sustainable benefits to the business.

From an agility perspective, the journey to the cloud may be rooted in digital or innovation areas.

However, problems often arise when attempts are made to either broaden the reach of cloud within the organisation or when steps are made to take the capability into production.

A failure to have broad organisational engagement and support from all key stakeholders in either of these cases is likely to see best intentions stall at best, and often result in overall failure.

Stakeholder engagement

Without buy-in from application owners, data owners and relevant CIO teams, the promised cost benefits associated with migrating applications off-premise rarely materialise. The danger is you end up investing significant effort to develop a new cloud-based capability but, without an overarching strategy to exploit cloud, it may increase, rather than reduce, your costs.

As in the 1989 Field of Dreams movie, starring Kevin Costner, you need to ensure that once you build it, they will come.

Successful outcomes will only be fully achieved when the cloud strategy is embraced at an organisational level, with engagement across the enterprise. Stakeholders have to buy-in to a clearly defined strategy and understand the motivation for moving applications to the cloud. This has to be driven by a vocal, suitably empowered C-level executive.

It’s not enough to have a single, passionate senior (cheer) leader. This individual also has to sit at the right level in the organisation to have the power to influence and align disparate teams to commit to the same journey. Whilst cloud adoption requires top-down support, it also requires bottom-up evangelism for success.

Indeed, the need to evangelise and sell the benefits of cloud within an enterprise shouldn’t be underestimated. Establishing some quick wins will not only support integration from a technical and operational perspective, but provide the success stories that can help to shift the cultural needle. A focus on taking the organisation with you on the journey will contribute greatly to a successful outcome.

From disparate to joined up

If you look around your organisation and see an enthusiastic but renegade team heading off under their own steam to build their own Field of Dreams, don’t panic.

It is still possible to bring structure into the cloud adoption programme and reshape it as an organisation-wide, joined-up initiative. Start by taking a step back and bringing all the stakeholders inside the tent.

Clearly it will be more challenging to achieve retrospective buy-in but it is certainly possible to do so, providing the programme has not passed a tipping point.

Ensure you avoid disenfranchising any key stakeholder groups, because if this happens then it will be difficult to recover the situation.

Take a cue from Kevin Costner: don’t stand alone in a field of corn but act decisively to secure the right support and you too could be nominated for awards for your starring role in cloud adoption.


February 2, 2018  12:21 PM

Cloud billing: Cutting through the complexity

Caroline Donnelly
AWS, cloud, Google, IaaS

In this guest post, Kat Lee, data analyst at service provider Cloudreach, shares some best practice on cloud billing to help enterprises cut costs.

IT cost control and management is a hot topic of debate in enterprises seeking to drive down operational expenditure. As the cloud adoption mentality has increasingly become “not if, when”, cloud cost control has become a key budgeting topic.

The majority of enterprises nowadays don’t just choose one cloud service provider, having realised the benefits of adopting a hybrid cloud model. But this now presents its own challenges. Each service provider has a different way of presenting cost and usage reporting, which can lead to complexity and confusion for end users.

Gartner research suggests 95% of business and IT leaders find cloud billing the most confusing element of using public cloud services. This confusion leads to a lack of governance and oversight which, of course, has a financial impact to the company.

Take stock of cloud resources

To get a handle on this, enterprises must get to the point where they know the amount of cloud resources being consumed by their teams.

The first point of action is to create a customised invoicing process to gather all the data related to cloud spend and break it down to a granular level. That way each employee and department can benchmark their usage. This will help the finance team identify any increases in spend as and when they happen and to understand the cause of them.
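In practice this often starts with something as simple as grouping the provider’s cost-and-usage export by a department tag. The sketch below assumes a hypothetical CSV export with ‘department_tag’ and ‘cost’ columns; real export formats vary by provider.

```python
import csv
from collections import defaultdict

# Aggregate a (hypothetical) cost-and-usage export by department so each team
# can benchmark its spend. The file name and column names are assumptions.
spend_by_department = defaultdict(float)

with open("cloud_usage_export.csv", newline="") as export:
    for row in csv.DictReader(export):
        spend_by_department[row.get("department_tag") or "untagged"] += float(row["cost"])

for department, cost in sorted(spend_by_department.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{department:<20} {cost:,.2f}")
```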

Cloud usage and cost reporting tools are of critical importance here. These tools are vital for giving clarity on usage and cost allocation, and for giving IT finance teams more accurate information when planning resources, helping to avoid overspend and an angry CIO.

Taking ownership of cloud billing

Amazon Web Services (AWS) and Google Cloud Platform (GCP)’s recent per-second billing could offer customers huge savings, but to take full advantage, end users must understand and own their billing process.

To get a handle on spend, enterprises must break down and allocate the costs to the right business group, department, project and individual employee. Resource tagging is absolutely vital in this regard to get true cloud governance in place and understand which project is consuming what resources.
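As a small illustration, tags can be applied programmatically at (or shortly after) resource creation, for example with the AWS SDK for Python (boto3). The instance ID and tag values below are placeholders, and valid AWS credentials are required to run it.

```python
import boto3

# Apply cost-allocation tags to an existing instance; the instance ID and tag
# values are placeholders, and valid AWS credentials are required to run this.
ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Project", "Value": "data-platform"},
        {"Key": "Department", "Value": "marketing"},
        {"Key": "Owner", "Value": "jane.doe"},
    ],
)
```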

Understanding the numerous ways cloud providers offer cost savings is vital to keeping cloud spend under control too.

AWS, Azure and GCP all offer discounts for committed usage of virtual machines, in varied ways, from upfront annual payments to reserved instances and automatic discounts on long running virtual machines. AWS also offers Spot Instances – a method of purchasing compute time at a significantly reduced cost when there is spare capacity – a virtual machine marketplace powered by supply and demand.

These cost saving opportunities are increasingly complex and the most mature cloud consumers will have dedicated resources focussed on managing costs.

Where next for cloud billing?

In the next 12 months, there is going to be increased growth and maturity in the cloud market. As enterprises increasingly see the inevitability of a move to the cloud, a key differentiator between cloud service providers is going to be how they deliver and bill enterprises for computing resources.

Having transparency and ownership over the billing cycle, and an understanding of the cost optimisation options available is the next step in helping enterprise customers make the most of cloud.


January 29, 2018  10:54 AM

Serverless vs. Microservices: What you need to know for cloud

Caroline Donnelly
cloud, Microservices

In this guest post Neil Turvin, CEO at nearshore software development company Godel Technologies, shares his thoughts on how serverless computing is gaining ground in the delivery of cloud computing

Although still in its infancy, serverless computing shares some of the characteristics of microservices, but is very different in the way it delivers cloud computing. There are pros and cons to both, but serverless is becoming an increasingly attractive option for a number of reasons.

Pricing is the first differentiator. Serverless is pay-as-you-go, making it an attractive option for applications with infrequent requests or for startup organisations. It also reduces operational costs, as infrastructure and virtual machines are handled by the service provider rather than managed directly on-premise or through IaaS or PaaS.

Scalability is also outstanding on serverless architecture – by its nature it can handle spike loads more easily and can be managed automatically, quickly and transparently. The only caveat is the limit on the number of requests that can be processed, making it unsuitable for high-load systems.

That brings me to architectural complexities – which can be a mixed bag.

Serverless is even more granular than microservices, breaking functionality down into much smaller units. On the flip side, that also creates much more complexity. It also has a number of other restrictions, such as limits on the number of requests and on operation duration, and support for fewer programming languages.

Its components are also often cloud provider specific, which can be problematic to change. Stateless by design, it must manage state strictly outside its functions – meaning no more in-memory cache.
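In practice that means any cache or counter has to live in an external store rather than in process memory. The sketch below assumes a Redis endpoint supplied via an environment variable, purely to illustrate the pattern; a managed database or object store would serve equally well.

```python
import os
import redis

# Connection details come from configuration; the environment variable name
# and the use of Redis are illustrative choices only.
cache = redis.Redis.from_url(os.environ.get("CACHE_URL", "redis://localhost:6379"))

def handler(event, context):
    """Keep the counter in an external store, not in process memory."""
    count = cache.incr("invocation_count")   # survives across function instances
    return {"statusCode": 200, "body": f"Invocation number {count}"}
```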

Furthermore, serverless functions can also act as scheduled jobs, event handlers etc, and not just as services.

Sorting through the serverless complexity

The level of granularity you get with serverless can also affect tooling and frameworks. This is because the higher the granularity, the more complicated integration testing becomes, making it more difficult to debug, troubleshoot and test.

Microservices by comparison is a mature approach and is well supported by tools and processes.

Time to market also comes into play. Due to the lightweight programming model and operational ease of a serverless architecture, time to market is greatly reduced for new features, which is a key driver for many businesses.

It also means prototypes can be quickly created for Internet of Things (IoT) solutions using Function as a Service (FaaS) for data processing to demonstrate a new technology to investors or clients.

Although microservices still provides a solid approach to service oriented architecture (SOA), serverless is gaining ground in event-based architecture and clearly has advantages in terms of reduced time to market, flexible pricing and reduced operational costs.

It’s unlikely for now that serverless will or should be the approach for every system – but watch this space as it matures. For now the best solution is a combination of both architectural approaches to help deliver and take advantage of the benefits the cloud brings to you and your customers.


January 24, 2018  3:48 PM

Unblocking the database bottleneck in enterprise DevOps deployments

Caroline Donnelly
Automation, cloud, Database, DBA, DevOps

In this guest post, DevOps consultant and researcher Nicole Forsgren, PhD, shares her advice on what enterprises can do to overcome the database problem when scaling up their DevOps endeavours.  

High performing DevOps teams that have transformed their software delivery processes can deploy code faster, more reliably, and with higher quality than their low-performing peers.

They do this by tackling technical challenges with effective and efficient automation, adopting processes drawn from the lean and agile canon, and fostering a culture that prioritises empathy and information flow.

A key to these successful transformations is fast feedback loops, and integrating key stakeholders early in the development and delivery process.

In the earliest iterations of DevOps, the first key stakeholder was IT operations: taking feedback about maintainability and scalability of your code. You could frame it this way: learn why IT operations so often put up barriers to code deploys, address that feedback sooner in the pipeline, and continue to work together.

Once dev and ops are humming along smoothly, many teams find they still hit bottlenecks in their deployments, particularly as they scale. And the trend I’m hearing more and more often is that this bottleneck is happening at the database.

Unfortunately this is not a challenge you can turn a blind-eye to. It’s not going away anytime soon and the tooling you are using to automate the build and deployment of your apps will not solve this problem.

No matter how fast you can get your application releases going, it’s more than likely they’ll be waiting for changes to the database before they can be shipped. There’s no time like the present to bring data into DevOps as the next move toward shifting left.

The devil’s in the database

Teams and organisations are humming along and delivering code. Things start to get more exciting as applications start to scale. Under the covers, however, the database is a shared resource across dev, test, and production.

In addition, the database release process is usually manual, making it a carefully orchestrated process that is slow, risky, and prone to errors. The database administrators (DBAs) guard this process carefully, and with good reason.

Early on, performance is acceptable, but as your application continues to grow and scale, this database release process gets more difficult. Or perhaps your application is already at scale, and you are shipping code faster, when suddenly you find the application code outpacing your database schema changes. “Brute force” only works so long before your DBAs are burning out and just can’t keep up.

When faced with more requests for database changes than are feasible or safe to carry out, the DBAs respond by protecting their resource and saying “no”. Suddenly, database changes can take weeks, while the competing software releases are using continuous delivery practices and pushing to production daily or even hourly.

Addressing the issue

Solving the database constraint in DevOps takes a few forms, and includes culture and tools.

Let’s start with culture. You’ll want to start shifting your database work upstream into the development phase and find problems before they get into production. This is similar to the “shift left” emphasis we’ve seen in other critical areas that are often left to the end of the delivery pipeline, like security.

To truly shift data left, allow your engineers to follow the same process they use today for the application code. Check your change scripts into source control, run automated builds, get notified when database changes ‘break the build’, and provide your engineers with the tooling they need to find and fix database problems long before they become an issue in production.
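One minimal way to wire this into an automated build is to apply every checked-in change script to a throwaway database and fail the build if any script errors. The sketch below assumes a migrations/ directory of SQL files and uses SQLite as the disposable scratch database; a real pipeline would target the same engine as production.

```python
import sqlite3
from pathlib import Path

def check_migrations(migrations_dir: str = "migrations") -> None:
    """Apply every checked-in SQL change script to a disposable database."""
    connection = sqlite3.connect(":memory:")   # throwaway build-time database
    try:
        for script in sorted(Path(migrations_dir).glob("*.sql")):
            try:
                connection.executescript(script.read_text())
                print(f"OK    {script.name}")
            except sqlite3.Error as error:
                # A non-zero exit code is what 'breaks the build' in CI.
                raise SystemExit(f"FAIL  {script.name}: {error}")
    finally:
        connection.close()

if __name__ == "__main__":
    check_migrations()
```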

As a database professional, give your engineering teams the tools they need to treat database code just like app code. This will equip them to know when the database is the problem, and teach them where and how to look for it. This work is important and can be shared upstream. These practices elevate DBAs to valuable consultants, not downstream constraints.

This may feel foreign at first, as the role of the DBA expands from building tools to collaborating and consulting across whole teams of people.

Where they were once protective and working on technology alone or with only other DBAs, they are now open and consultative with others. Where their work with the database was once the last stop in the chain of application release automation, this work is (hopefully) triggered much earlier in the pipeline.

Done right, this approach frees the DBA team from reviewing every change in every change script, so they can spend their valuable time on more important tasks such as patching and upgrading, performance tuning, data security, and capacity planning.

Expanding the reach of a DBA

Engineering teams usually start with just a few DBAs, so you have to scale the person — and this is done with technology and automation.

This helps teams and organisations work faster, protect their data, and increase productivity. Just as with any other part of the DevOps process, automation speeds up our work: As the application and the databases start to scale, DBAs will find themselves needing to scale their services.

Deploying 100 database servers is much faster (and more effective) with scripts compared to doing it manually.

Rolling out database schema changes is a high-risk move; using tooling and technology helps mitigate this risk in three ways. First, by introducing and testing these changes earlier in the application development pipeline, you discover errors sooner, allowing teams to find and fix changes before they hit production.

Second, automation provides traceable steps and verification if any steps need to be repeated or reversed. Third, automation increases productivity by creating repeatable processes that allow you to manage production deployments and database schema migrations.

When you’re developing and delivering code with speed and stability, but you start to hit a database constraint, don’t panic. Just apply the same DevOps principles you used for software: focus on bridging culture with your database team, improving and streamlining process with your database team, and leveraging a smart investment in tooling and automation.

Author acknowledgements

Many thanks to Silvia Botros and Darren Pedroza for sharing their thoughts and experiences with databases in DevOps transformations. I would also like to thank Camille Fournier for reading an early draft of this post.

Also, for more detail, I suggest you check out Database Reliability Engineering, by Laine Campbell and Charity Majors. It is essential reading for those digging into databases and database administration in DevOps today.

 


January 17, 2018  4:07 PM

Cloud anarchy in the UK: Here’s how to beat it

Caroline Donnelly
cloud, Cloud adoption, Shadow IT

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation shares his thoughts on what enterprises can do to address the anarchy in their cloud deployments

Just over 40 years ago the Sex Pistols released their first single, Anarchy in the UK. Today we are experiencing anarchy of a different kind in some of the UK’s largest businesses.

Putting production workloads into the cloud has, on the face of it, never seemed so easy. The accessibility and ease of consumption of cloud-based services has unleashed a rush to the cloud by many large enterprises looking to take on their nimbler ‘born in the cloud’ competitors.

But there is growing evidence that many enterprises are struggling with cloud anarchy because they are not fully in control of their journey off-premise. And, without a comprehensive roadmap in place for enterprise-wide cloud adoption, cloud chaos ensues.

Shadow IT-induced anarchy

In a cloud equivalent of the Wild West, shadow IT is a major cause of cloud anarchy facing enterprises today.

It’s not unusual for employees to become frustrated by the IT department’s seemingly slow progress and subscribe to a  SaaS offering themselves without considering the impact this decision will have on the rest of the business.

Other problems arise when boards rush to embrace cloud without having defined a comprehensive vision and strategy that takes into account existing business processes, employees’ skills, company culture, and legacy IT infrastructure.

While a cloud-first approach might get off to a cracking start, without that clear company-wide vision and strategy, it is destined to lose momentum fast.

The chaotic environments resulting from these ad-hoc approaches have far-reaching consequences for an organisation’s corporate governance, purchasing, and IT service integration processes.

Good cloud governance

Where governance is concerned, it is unlikely there will be full visibility of what cloud services are being consumed where, and whether appropriate controls and standards are being met.

This problem is exacerbated in highly-regulated industries, such as financial services, where organisations are required to demonstrate they are: mitigating risk, managing IT security appropriately, managing audits and suppliers effectively, and putting appropriate controls in place to ensure compliance with regulations around data sovereignty and privacy such as the EU GDPR.

Financial services firms also need to demonstrate they are managing material outsource risks effectively, in order to comply with FCA regulations.

The uncontrolled purchase and use of SaaS or PaaS services without the appropriate level of IT engagement will also throw up a whole raft of integration, visibility and support headaches.

‘Technology snowflakes’ are another cause for concern. These occur when the same problem is being solved in different ways by different teams, which leads to IT support inefficiencies and additional costs.

Enterprises need to factor in some of the other financial implications of cloud anarchy too. These include a fragmented procurement process that makes it difficult to cut the best deal, as well as questions over how teams consuming their own cloud services manage their budgets in the context of consumption-based services.

Embracing a cloud-shaped future

With a clear cloud strategy underpinned by appropriate controls, everyone will have the tools they need to innovate faster. The final piece in the puzzle is to ensure employees are fully engaged, and have the skills required to take advantage of this new approach and tools.

This requires building a company culture that embraces the cloud in a structured way, and promptly plugging any skills gaps in your employees’ knowledge.

With the Sex Pistols’ anthem still ringing in my ears, it occurs to me that Johnny Rotten was half right when he screamed the immortal lines: “Don’t know what I want, but I know how to get it”.

With cloud adoption, it’s important that everyone within the business pogos to the same tune – and that there is agreement up front on what is required.

Without a strong cloud vision and strategy, it’s impossible to know where you’re heading, how you’re going to get there, and when you’ve arrived.


January 15, 2018  11:43 AM

The art of finding (and fixing) cloud faults

Caroline Donnelly
Cloud outages, Latency, performance

In this guest post, Ron Vermeulen, go-to-market manager for north-west Europe at IT services provider, Comparex, runs through the process of finding and fixing cloud faults

There is no doubt that cloud computing offers huge benefits to organisations, but CIOs must accept and manage the potential barriers to realising its value.

Service faults and latency issues can prove problematic, for example, when the application in question is business-critical. They can also cost organisations time and money, and have a negative impact on the end-user experience.

Pinpointing where a performance issue occurs in the first place can also be a challenge.

When on-premise IT infrastructure was de rigueur, it was far easier for organisations to find the source of the problem, which could be down to a misbehaving server in the datacentre, for instance.

It’s not so simple today, because ‘your’ public cloud server is now in someone else’s facility, and the difficulty is compounded because the glitch could be closer to home, rather than the fault of the service provider.

A cloud service might be performing fine, but a network problem could be causing issues at ‘home’. A managed service can often help to lessen this headache by identifying, on behalf of the organisation, where the problem lies in the first place.

Finding cloud faults and fixing them

Fixing the fault is the next hurdle. If the problem is with a supplier’s services (rather than in-house) then another complication is added.

Different service level agreements (SLAs) for fixing a fault are in place with each cloud supplier, and managing the various terms and conditions is a mammoth task.

SLAs governing ‘time-to-repair’ can vary greatly – up to 30 hours in some cases. For a business-critical application this is an unacceptable timeframe.

Organisations can pay for a higher level of SLA to guarantee a rapid fix time, but this is rarely factored into their initial cloud costs. As such, organisations can end up paying more than expected just to keep the lights on.

The flexibility and agility of cloud still make it the first choice for lots of organisations, but when it comes to management, many IT teams have essentially relinquished control over support and maintenance.

It is critical organisations retain visibility across their IT infrastructure and ensure individual SLAs meet the specific needs of their organisation.

Take back control of cloud

Regaining control of a cloud deployment can be achieved by adding an overarching management layer that offers visibility, or engaging a managed service to help implement this.

This means, rather than relying on a vendor to analyse a support ticket, the analysis can begin at home.

Pinpointing an issue can be done in as little as 30 minutes using the tools and services available today. These offer an even greater level of control: a sophisticated management layer or service can actually spot a problem before it happens, so issues can be fixed proactively.
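Even a very small home-grown probe illustrates the idea: measure an endpoint’s response time on a schedule and raise an alert before users notice a slowdown. The URL and threshold below are placeholders for the example.

```python
import time
import requests

# The endpoint and threshold are placeholders for the example.
ENDPOINT = "https://status.example.com/health"
THRESHOLD_SECONDS = 1.5

def probe(url: str = ENDPOINT) -> None:
    started = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - started
    if response.status_code != 200 or elapsed > THRESHOLD_SECONDS:
        print(f"ALERT: {url} returned {response.status_code} in {elapsed:.2f}s")
    else:
        print(f"OK: {url} responded in {elapsed:.2f}s")

if __name__ == "__main__":
    probe()
```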

This level of visibility into cloud is a ‘must have’, not just a ‘nice to have’ – particularly as convoluted IT infrastructures become commonplace.

The shift to multi- and hybrid-cloud installations, pointed out by Gartner, is one example of this increasing complexity. The cloud ‘stack’ no longer just encompasses software-, infrastructure- and platform services, but can be made up of six interlocking layers.

Ultimately, ‘out of sight, out of mind’ is not a viable approach to cloud. Ensuring seamless performance and round the clock availability can only be achieved by retaining visibility and control.

