Ahead in the Clouds


July 13, 2018  3:17 PM

Disaster recovery and business continuity: A best practice guide

Caroline Donnelly

In this guest post, Paul Timms, managing director of IT support provider MCSA, shares his thoughts on why enterprises can ill-afford to overlook the importance of business continuity and disaster recovery.

With downtime leading to reputational damage, lost trade and productivity loss, organisations are starting to realise continuity planning and disaster recovery are critical to success.

Business continuity needs to be properly planned, tested and reviewed in order to be successful. For most businesses, planning for disaster recovery will raise more questions than answers to begin with, but doing the hard work now will save a lot of pain in the future.

Ascertain the appetite for risk

All businesses are different when it comes to risk. Some may view a ransomware attack as a minor inconvenience, dealt with by running on paper for a week while they rebuild systems, whereas others view any sort of downtime as unacceptable.

The individual risk appetite of your organisation will have a significant impact on how you plan and prepare for business continuity. You will need to weigh up your sector, size, and attitude towards downtime versus cost and resources. Assessing this risk appetite will let you judge where to allocate resources, and focus your priorities.

Plan, plan and plan some more

To properly plan for disaster recovery, it is critical to consider every aspect of a business continuity event, its impact, and how to mitigate it.

For example, if power goes down in the organisation’s headquarters, so will the landline phones, but mobiles will still be functional. A way to mitigate this impact would be to reroute the office phone number to another office or to a mobile at headquarters. To do that, you need to consider where the rerouting instructions are stored, and who knows where to find them.

This is just one example. You need to consider all the risks, and all the technology and processes that you use. Consider the plan, the risk, the solution and where you need to invest and strengthen your plan to ensure your business can still function in the event of a disaster.

Build in blocks and test rigorously

Ideally, IT solutions will be built and tested in blocks, so you can assess the risks and solutions in a functional way: your network, WAN/LAN, storage and data protection, for example.

Plans for each then need to be tested in a rigorous way with likely scenarios. What if, for example, a particular machine fails? What happens if the power supply cuts out? What happens in the case of a crypto virus? Do you have back-ups? Are they on site? Do you have a failover set-up in case of system failure? Is the second system on site or in a completely different geography? What do I do with my staff – can they work from home? Are there enough laptops?

These tests will drive out and validate (or invalidate) assumptions about managing during a business continuity event. For example, if a crypto virus has infected your main servers, it will probably have replicated across to your other sites too, so your only options are to restore from back-ups or to use a technology solution that lets you roll back to a point before the virus was unleashed.

Cloud is not the only answer

It can be tempting to think cloud can solve all the problems, but that is not always the case. Data in the cloud is not automatically backed up and is not necessarily replicated to a second site. These are options on some public cloud services, but they are often expensive and, as a result, underused.

Despite being cloud-based, a company’s Office 365 environment can still get a virus and become corrupted. If you have not put the IT systems in place to back that data up, then it will be lost. If, for example, the cloud goes down, you need to consider a failover system.

The interesting part of this is that the public cloud doesn’t go down very often, but when it does it is impossible to tell how long it will be out of action for. Businesses must therefore consider when to invoke disaster recovery in that instance.

Know your core systems

One solution that some companies adopt is running core systems and storage on infrastructure that they know and trust. This means knowing where it is and what it is, so it meets their performance criteria. Businesses also need to consider how this system is backed up, including what network it is plugged into, ensuring it has a wireless router as standby, that the failover system is at least 50 miles away on a separate IP provider, that the replication is tested, and that the data protection technology works and is tested.

This gives businesses much better knowledge and control in a business continuity event such as a power outage. Businesses can get direct feedback about the length of the outage, meaning they have better visibility and the ability to make the right decisions.

Plan and prioritise

When making a plan you need to consider the risks and your objectives. A useful approach is to consider how technology can help mitigate those risks and help you meet your objectives. When considering budget, there is no upper limit to what you can spend; instead, focus on your priorities and then have the board sign them off.

Spending a day brainstorming is a good way of working out what concerns your organisation the most and what will have the most detrimental impact should it go wrong. Needless to say, something with a high risk of serious impact needs to be prioritised. In terms of who executes any business continuity plan, as the saying goes, don’t put all your eggs in one basket – involving numerous people, and hence ensuring more than one person is trained in the business continuity plan, will significantly mitigate the impact of any event.

July 11, 2018  11:09 AM

How people, processes and technology determine DevOps success

Caroline Donnelly

In this guest post, Eran Kinsbruner, lead technical evangelist at DevOps software supplier Perfecto, talks about why success in agile software development hinges on getting the people, processes and technology elements all in alignment.

In a super-charged digital environment where competition is fierce and speed and quality are crucial, many organisations are incorporating DevOps practices (including continuous integration and continuous delivery) into their software development processes.

These companies know software is safer when people with complementary skills in technical operations and software development work together, not apart. However, to keep succeeding, organisations must be committed to ongoing self-evaluation and embrace the need to change when necessary.

Keeping it fresh is key to success – and for us, this comes from three primary areas: people, processes and technology.

The people part of the DevOps equation

The developer’s role is constantly evolving, and functions that were once owned by testers or quality assurance (QA) teams now fall firmly under their remit. This means new skills are needed, and it’s important that organisations are committed to upskilling their developers.

For some, this means facilitating the mentoring of testers by highly qualified developers. And for others it means considering a change in software development practices to include Acceptance Test Driven Development (ATDD), which promotes defining tests as code is written.

Test automation becomes a core practice during feature development rather than afterwards. Depending on team skills, Behaviour Driven Development (BDD), which expresses automated tests in simple, English-like syntax, can serve less technical teams extremely well. There are bound to be blind spots between developer, business and test personas – and choosing development practices matched to team skills can help accelerate development velocity.
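
To illustrate that English-like syntax, here is a minimal BDD sketch using the Python behave library; the shopping-basket feature, the step wording and the Basket class are hypothetical, invented purely for the example.

    # basket.feature (Gherkin: the English-like part that business, test and
    # developer personas can all read):
    #
    #   Feature: Shopping basket
    #     Scenario: Adding an item updates the total
    #       Given an empty basket
    #       When I add an item costing 5 pounds
    #       Then the basket total is 5 pounds

    # steps/basket_steps.py: the automation code that backs each step.
    from behave import given, when, then

    class Basket:
        """Hypothetical system under test."""
        def __init__(self):
            self.total = 0

        def add(self, price):
            self.total += price

    @given("an empty basket")
    def step_empty_basket(context):
        context.basket = Basket()

    @when("I add an item costing 5 pounds")
    def step_add_item(context):
        context.basket.add(5)

    @then("the basket total is 5 pounds")
    def step_check_total(context):
        assert context.basket.total == 5

Running the behave command executes the scenario and reports it in the same plain-English terms, which is what makes the approach accessible to less technical team members.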

Leadership is another critical aspect of success in DevOps and continuous testing. Diverse teams and personas call for strong leadership as a unifying force, and a leader’s active role in affecting change is crucial. Of course, part of leadership is to enforce stringent metrics and KPIs which help to keep everyone on track.

The importance of process

Teams must work to clean up their code, and do it regularly. That includes more than just testing. Code refactoring (the process of restructuring computer code) is important for optimal performance, as is continually scanning for security holes.

It also includes more than just making sure production code is ‘clean’. It’s crucial to ‘treat test code as production code’ and maintain that too. Good code is always tested and version controlled.

Code can be cleaned and quality ensured in several ways. The first is through code reviews and code analysis: making sure code is well maintained and there are no memory leaks. Using dashboards, analytics and other visibility enablers can also help power real-time decision-making based on concrete data – and can help teams deliver more quickly and accurately.

Finally, continuous testing by each feature team is important. Often, a team is responsible for specific functional components along with testing, and so testing code merges locally is key to detect issues earlier. Only then can teams be sure that, once a merge happens into the main branch, the entire product is not broken, and that the overall quality picture is kept consistent and visible at all times.

Let’s talk technology

When there is a mismatch between the technology and the processes or people, development teams simply won’t be able to meet their objectives.

A primary technology in development is the lab itself. A test environment is the foundation of the entire testing workflow, including all continuous integration activities. It perhaps goes without saying that when the lab is not available or unstable, the entire process breaks.

For many, the requirement for speed and quality means a shift to open-source test automation tools. But, as with many free and open-source software markets, a plethora of tools complicates the selection process. Choosing an ideal framework isn’t easy, and there are material differences between the needs of developers and engineers, which must be catered for.

A developer’s primary objective is fast feedback on their localised code changes. Frameworks like Espresso, XCUITest and JSDom or Headless Chrome Puppeteer are good options for this.

A test engineer, on the other hand, concentrates on the integration of various developers’ work into a complete packaged product, and for that, their end-to-end testing objectives require different frameworks, like Appium, Selenium or Protractor. Production engineers execute end-to-end tests to identify and resolve service degradations before the user experience is impacted. Frameworks such as Selenium or Protractor are also relevant here, but integration with monitoring and alerting tools becomes essential to fit into their workflow.
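
For illustration, a minimal end-to-end check written against Selenium’s Python bindings might look like the sketch below; the URL, element locators and page title are hypothetical placeholders, and a real suite would of course target the team’s own application.

    # A minimal Selenium end-to-end check: log in and verify the landing page.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver set-up
    try:
        driver.get("https://example.com/login")                      # hypothetical URL
        driver.find_element(By.NAME, "username").send_keys("demo")   # hypothetical fields
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title                           # hypothetical check
    finally:
        driver.quit()

The same WebDriver-style structure carries over to Appium for mobile targets, which is part of why hybrid tooling strategies work in practice.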

With such different needs, many organisations opt for a hybrid model, where they use several frameworks in tandem.

People, processes and technology – together

Ultimately, we believe that only by continually re-evaluating people, processes and technology – the three tenets of DevOps – can teams achieve accelerated delivery while ensuring quality. It’s crucial in today’s hyper-competitive landscape that speed and quality go hand in hand, and so we’d advise every organisation to take a look at their own operations and see how they can be spring-cleaned for success.


June 22, 2018  3:23 PM

Why next-generation IaaS is likely to be open source

Caroline Donnelly

In this guest post, Chip Childers, CTO of open source platform-as-a-service Cloud Foundry, makes the case for why the future of public cloud and IaaS won’t be proprietary.

Once upon a time, ‘Infrastructure-as-a-Service’ basically meant services provided by the likes of Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP), but things are changing as more open source offerings enter the fray.

This is partly down to Kubernetes which, helped by its association with Docker and others, has done much to popularise container technology and usher in a period of explosive innovation in the ‘container platform’ space. This is where Kubernetes stands out, and today it could hold the key to the future of IaaS.

A history of cloud computing and IaaS

History in technology, as in everything else, matters. And so does context. A year in tech can seem like a lifetime, and it’s really quite incredible to think back to how things have changed in just a few short years since the dawn of IaaS.

Back then the technology industry was trying to deal with the inexorable rise of AWS, and the growing risk of a monopoly emerging in the world of infrastructure provision.

In a bid to counteract Amazon’s head start, hosting providers started to make the move from traditional hosting services to cloud (or cloud-like) services. We also began to see the emergence of cloud-like automation platforms that could potentially be used by both enterprise and service providers.

Open source projects such as OpenStack touted the potential of a ‘free and open cloud’, and standards bodies began to develop common cloud provider APIs.

As a follow on to this, API abstraction libraries started to emerge, with the aim of making things easier for developers working with cloud providers who did not just want to rely on a few key players.

It was around this time that many of the cloud’s blue-sky thinkers first began to envisage the age of commoditised computing. Theories were posited that claimed we were just around the corner from providers trading capacity with each other and regular price arbitrage.

Those predictions proved to be premature. At that time, and in the years since, computing capacity simply wasn’t ready to become a commodity that providers could pass between each other – the implementation differences were simply too great.

That was then – but things are certainly looking different today. Although we still have some major implementation differences between cloud service providers, including the types of capabilities they’re offering, we’re seeing the way forward to an eventual commoditisation of computing infrastructure.

While even the basic compute services remain different enough to avoid categorisation as a commodity, this no longer seems to matter in the way that it once did.

That change has largely come about because of the ‘managed Kubernetes clusters’ now offered by most public cloud providers.

The shift has also been happening in the private sector, with many private cloud software vendors adopting either a ‘Kubernetes-first’ architecture or a ‘with Kubernetes’ product offering.

As Kubernetes continues its seemingly unstoppable move towards ubiquity, Linux containers now look likely to become the currency of commodified compute.

There are still implementation differences of course, with cloud providers differing in the details of how they offer Kubernetes services, but the path towards consistency now seems a lot clearer than it did a decade ago.

This more consistent approach to compute now seems inevitable, and it points to a future of IaaS made possible by the open source approach of Kubernetes.


June 18, 2018  11:24 AM

SaaS vs IaaS vs on-premise: What if moving to the cloud is not the answer?

Caroline Donnelly

In this guest post, Richard Blanford, managing director of G-cloud-listed IT services provider Fordway, advises companies to weigh up the pros and cons of using SaaS and IaaS before going all-in on the cloud.

For many organisations, moving to the cloud makes complete business sense. If I set up a business now, I would run all my IT in the cloud.

But that doesn’t mean a ‘cloud for everything’ policy will work for every company. Most of us are not starting from scratch or working with a small number of relatively straightforward applications. Therefore, we need to consider carefully whether all our applications can be transferred efficiently and cost-effectively to the cloud.

The first step should always be to look for an appropriate SaaS service. This should provide:

  1. A suitable application that can be configured (where and if necessary) and data imported to provide comparable or better functionality to existing applications at a suitable price, paid on a metered basis, ideally per user/per month. It will ideally offer tools and communities so you can benefit from the experiences of those who’ve already implemented it.
  2. A supporting infrastructure which is no longer your responsibility with an appropriate, fit for purpose SLA that meets your business and operational requirements.
  3. The ability to easily and cost-effectively access, import and export data to other applications for analysis and business reporting.

Once you’ve identified such a service, and confirmed it offers the right resilience at the right cost, you can simply consume it, whilst monitoring to ensure you receive what you’ve signed up for.

The best analogy for SaaS billing is turning on a tap to obtain water, rather than going to a well to collect it and then readying it for consumption through purification. Good SaaS provides what you need when you need it, and you’re billed for what you use.

Assessing the cloud-use cases

Cloud is also cost-effective for non-live environments where you pay as you use. This includes disaster recovery (DR), where all servers can be held in a suspended state without incurring charges until needed, and test and development environments, where you only pay when your code runs.

All you need to provide is management. Just be aware that different cloud providers’ Platform as a Service (PaaS) offerings have different APIs, so there’s some element of provider lock-in that may come into play.

It’s more difficult to find appropriate SaaS offerings for niche applications and those that need to be customised to align with business processes. Many application providers are developing their own SaaS strategy, but these typically only support the latest version of the software, and many cannot accept customised applications or third-party add-ons.

This can be a particular problem for local authorities and the NHS, who use highly customised applications for services such as parking management, waste collection and medication prescription and management.

We’ve investigated many ‘SaaS’ offers for our customers, and all too often the vendor will simply park and maintain a dedicated version of the user’s software on a public cloud service while charging a premium price.

SaaS vs. IaaS

If SaaS is not an option, there is also IaaS to consider. You simply move your application (as-is or with minor enhancements) to operate from a cloud provider’s infrastructure. This frees you from the need to own, operate and manage the infrastructure hosting it, although you need to retain licences and support from the application provider.

There are two provisos with IaaS. First, each provider has different design rules, and you need to work through their menu of choices to find the right solution for your organisation. This requires a thorough understanding of your environment, such as how many fixed and mobile IP addresses are needed, whose DNS service will be used, how much data will go in and out of the cloud etc.

Think of it as choosing a separate hub, spokes and rim for a bicycle wheel rather than simply buying a complete wheel.

The devil’s in the detail

Many organisations don’t specify their IT to this level of detail, as once they’ve bought their infrastructure they use all the capacity available. In a cloud environment, however, everything is metered, and – unless the service can be specified extremely tightly – it may not work out cheaper than in-house provision. For example, you can reduce costs by reserving instances, but you are then locked in for one or three years, with a significant cancellation charge. A similar issue arises with spot instances: these can be shut down with no notice, so they aren’t suitable for business-critical services.
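
As a rough illustration of the trade-off (the hourly rates below are invented for the example, not any provider’s actual prices), a back-of-the-envelope comparison might look like this:

    # Hypothetical comparison of on-demand vs reserved instance costs.
    HOURS_PER_MONTH = 730

    ON_DEMAND_RATE = 0.10   # $ per instance-hour (made-up figure)
    RESERVED_RATE = 0.065   # $ per instance-hour on a 1-year commitment (made-up figure)

    def monthly_cost(rate, instances, utilisation=1.0):
        """Metered cost: you pay for what runs, scaled by average utilisation."""
        return rate * instances * HOURS_PER_MONTH * utilisation

    # Ten instances used only in business hours (~25% of the month) vs 24x7.
    print(monthly_cost(ON_DEMAND_RATE, 10, utilisation=0.25))  # ~182.50 on demand
    print(monthly_cost(RESERVED_RATE, 10))                     # ~474.50 reserved, billed 24x7
    print(monthly_cost(ON_DEMAND_RATE, 10))                    # ~730.00 on demand, 24x7

On these made-up numbers, reserving only pays off for workloads that genuinely run around the clock, which is exactly why the service needs to be specified tightly before committing.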

Secondly, the cloud provider only provides hosting, including host and hypervisor patching and proactive infrastructure security monitoring. Any other patching (plus resilience, back-up, security and application support and maintenance inside the instance) needs to be provided in-house or by contracting third parties. Any scaling up or down has to be done using the cloud provider’s tools, and this quickly becomes complex when most organisations have on average 40 applications. In short, managing IaaS is effectively a full-time job.

Much of this complexity can be hidden by using a managed IaaS service, where a provider handles everything from operating system provision and authentication to patching and back-up, and charges for an agreed number of instances per month. Managed IaaS services effectively offer your legacy application back to you as SaaS.

This complexity should not deter you if you are determined to transfer your applications to cloud. However, it is important to go in with your eyes open, or to find an expert to go in with you. Alternatively, if SaaS is not available and IaaS sounds like too much work at present, there is a solution: configure your IT as a private cloud. You can then continue to run it in-house with everything in place to move it to SaaS when a suitable solution becomes available.


June 8, 2018  9:32 AM

With big data comes big responsibility

Caroline Donnelly

In this guest post, Darren Watkins, managing director at colocation provider VIRTUS Data Centres, on how to help organisations take back control over their big data.

When the story broke in March that 50 million Facebook profiles were harvested for British research firm Cambridge Analytica in a major breach, questions about the misuse of personal data once again hit the headlines.

There has been a barrage of promises in response from European politicians and tech executives, vowing to do their best to tighten up controls over our data and even introduce new laws to punish blatant malpractice.

Facebook itself has been contrite, with public apologies from CEO Mark Zuckerberg and most recently the announcement of a bounty program which will reward people who find cases of data abuse on its platforms.

Using big data for good

Incidents like this undoubtedly fuel public wariness about how commercial organisations use their data, but – on the flip side – those in the technology industry know that data capture can be of enormous benefit.

From improving healthcare to powering shopping, travel and even how we fall in love, ‘big data’ is all around us and it’s here to stay. Indeed, the Open Data Institute’s (ODI) recent YouGov poll of British adults revealed nearly half of people said they would share their own data about their background and health preferences – without restrictions – if it helped advance academic understanding of areas such as medicine or psychology.

However, for any organisation that operates with data there is a fine line to tread between innovation and responsibility.

There’s a big move at the moment for companies to embrace the ethos of responsible innovation. For some, this means creating products and services designed to meet humanitarian needs. Facebook’s partnership with UNICEF to better map disaster areas is a great example of this. For others it means everyone in the IT industry should move away from looking at their work from a purely technical point of view and ask how their developments may affect end-users and society as a whole.

When it comes to data applications, responsible innovation is a commitment to balancing the need to deliver compelling, engaging products and services with the need to make sure data is stored, processed, managed and used properly. This process starts a long way from the headlines, or the CEO statements.

Preparation is key

To avoid falling victim to breaches or scandals, companies must ensure they have the right ‘building blocks’ in place. And this starts with data security.

Simple hacking is where big data shows big weakness, thanks to the millions of people whose personal details can be put at risk by any single security breach. The scope of the problem has grown considerably in a short time. It wasn’t too long ago that a few thousand data sets being put at risk by a hack was a major problem. But in September 2017, Yahoo confirmed it had failed to secure the real names, dates of birth, and telephone numbers of 500 million people. That’s data loss on an unimaginable scale, and for the public, that’s scary stuff.

This, together with the computing power needed for big data applications, puts increasing pressure on organisations’ IT and data centre strategies, and this is the challenge which most urgently needs to be overcome. Indeed, it’s not an exaggeration to say that data centre strategy could be crucial to big data’s ultimate success or failure.

For even the biggest organisations, the cost of having (and maintaining) a wholly-owned datacentre can be prohibitively high. But security concerns can mean that a wholesale move to cheap, standard cloud platforms in a hybrid model – where security may not be as advanced – also isn’t an option.

Instead, the savviest firms are turning to colocation options for their data storage needs, recognising that moving into a shared environment means that IT can more easily expand and grow, without compromising security or performance.

It’s by starting here, in the ‘weeds’ of the data centre, that companies can ensure they’ve got firm control over their biggest asset – the data they have and how they use it.

As the public grows more wary of data breaches, the pressure will (and already has) come to bear on the business community to pay more attention to securing, storing and using data in the right way. Countermeasures that used to be optional are in the process of becoming standard, and increased pressure is being put on companies’ IT systems and processes. For us, it’s in the datacentre where companies can take firm control – avoid the scandals and make sure that innovation is done right.


May 14, 2018  12:28 PM

Kubecon 2018: The rise and rise of Kubernetes

Caroline Donnelly

In this guest post, Jon Topper, CTO of DevOps and cloud infrastructure consultancy The Scale Factory, on how the growing maturity of Kubernetes dominated this year’s Kubecon.

KubeCon + CloudNativeCon Europe, a three day, multi-track conference run by the Cloud Native Computing Foundation (CNCF), took place in Copenhagen earlier this month, welcoming over 4,300 adopters and technologists from leading open source and cloud native communities.

The increase in popularity of container orchestrator software Kubernetes was a defining theme of this year’s show, as it moves beyond being an early adopter technology, to one that end-user organisations are putting into production in anger.

What is Kubernetes?

Kubernetes provides automation tooling for deploying, scaling, and managing containerised applications. It’s based on technology used internally by Google, solving a number of operational challenges that may have previously required manual intervention or home-grown tools.
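
As a small illustration of that automation, the sketch below uses the official Kubernetes Python client to scale a deployment; the cluster, the ‘default’ namespace and the deployment name ‘web’ are assumptions made for the example.

    # Scale an existing deployment via the Kubernetes API, the kind of task that
    # would otherwise need manual intervention or home-grown scripts.
    from kubernetes import client, config

    config.load_kube_config()   # read the local kubeconfig, as kubectl does
    apps = client.AppsV1Api()

    # Hypothetical deployment "web" in the "default" namespace.
    deployment = apps.read_namespaced_deployment(name="web", namespace="default")
    deployment.spec.replicas = 5    # declare the desired state
    apps.patch_namespaced_deployment(name="web", namespace="default", body=deployment)
    # The control plane then converges the cluster towards five running replicas.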

This year the Kubernetes project graduated from the CNCF incubator, demonstrating that it is now ready for adoption beyond the early adopter communities where it has seen most use thus far.

Many of the conference sessions at the show reinforced the maturity message, with content relating to grown-up considerations such as security and compliance, as well as keynotes covering some interesting real-life use cases.

We heard from engineers at CERN, who run 210 Kubernetes clusters on 320,000 cores so that 3,300 users can process particle data from the Large Hadron Collider and other experiments.

Through the use of cluster federation, they can scale their processing out to multiple public clouds to deal with periods of high demand. Using Kubernetes to solve these problems means they can spend more time on physics and data processing than on worrying about distributed systems.

This kind of benefit was reiterated in a demonstration by David Aronchick and Vishnu Kannan from Google, who showed off Kubeflow.

This project provides primitives to make it easy to build machine learning (ML) workflows on top of Kubernetes. Their goal is to make it possible for people to train and interact with ML models without having to understand how to build and deploy the code themselves.

In a hallway conversation at the show with a member of the Kubernetes Apps Special Interest Group (sig-apps), I learned there are teams across the ecosystem working on providing higher order tooling on top of Kubernetes to make application deployment of all kinds much easier.

It will eventually be the case that many platform users won’t interact directly with APIs or command line tools at all.

Commodity computing

This ongoing commodification of underlying infrastructure is a trend that Simon Wardley spoke about in his Friday keynote. He showed how – over time – things that we’ve historically had to build ourselves (such as datacentres) have become commoditised.

But spending time and energy on building a datacentre doesn’t give us a strategic advantage, so it makes sense to buy that service as a product from someone who specialises in such things.

Of course, this is the Amazon Web Services (AWS) model. These days we create managed databases with their RDS product instead of building our own clusters of MySQL.

At an “Introducing Amazon EKS” session, AWS employees described how their upcoming Kubernetes-as-a-Service product will work.

Amazon is fully bought into the Kubernetes ecosystem and will provide an entirely upstream-compatible deployment of Kubernetes, taking care of operating the control servers for you. The release date for this product was described, with some hand-waving, as “soon”.

In working group discussions and on the hallway track, it sounded as though “soon” might be further away than some of us might like – there are still a number of known issues with running upstream Kubernetes on AWS that will need to be solved.

When the product was announced at AWS re:Invent last year, Amazon boasted (based on the results of a CNCF survey) that 63% of Kubernetes workloads were deployed on AWS.

At this conference, they came with new figures stating that the number had dropped to 57% – could that be because both Google Cloud and Microsoft’s Azure already offer such a service?

Wardley concluded his keynote by suggesting that containerisation is just part of a much bigger picture where serverless wins out. Maybe AWS is just spending more time on their serverless platform Lambda than on their Kubernetes play?

Regardless of where we end up, it’s certainly an exciting time to be working in the cloud native landscape. I left the conference having increased my knowledge about a lot of new things, and with a sense that there’s still more to learn. It’s clear the cloud native approach to application delivery is here to stay – and that we’ll be helping many businesses on this journey in the years to come.


May 3, 2018  10:37 AM

Remote control: Making virtual desktops and applications work in the enterprise

Caroline Donnelly

In this guest post, Jack Zubarev, president of application virtualisation software provider Parallels, sets out what IT departments can do to get the most out of their virtual desktop deployments.

The advent of cloud computing and ubiquitous, fast internet connectivity means office desktops with locally installed applications are becoming a thing of the past for many employees.

Remote application and desktop delivery can provide an easy way to manage, distribute and maintain business applications. Virtualised applications run on a server, while end users view and interact with them over a network via a remote display protocol. Remote applications can also be completely integrated with the user’s desktop so that they appear and behave like local applications.

Application delivery is more vital than ever, but can also be challenging. End-users want consistent performance with a seamless experience, while IT is focused on efficient and effective management and security at a reasonable cost.

Some specific application delivery challenges include: delivering reliable and responsive application performance over a variety of network connections; allowing bring your own device (BYOD) users to securely access applications and data from anywhere, at any time, on any device; managing and delivering legacy applications centrally, alongside modern applications, on the same device; and ensuring application and data security on devices that remotely access virtual resources, while maintaining regulatory compliance and privacy.

The benefit of virtual desktops

Remote application and desktop delivery provides many tangible business benefits, including performance improvements and a reduction in downtime; it can also help simplify management and update tasks, and – of course – reduce costs.

The centralised management system offered by application delivery enables you to effectively monitor and manage the entire infrastructure from a single dashboard. Even when installing new components or configuring a multisite environment, there’s no need to log in to other remote servers.

Managing everything centrally gives you more control, and reduced hardware means fewer people are needed to manage it. You don’t have to deal with updates, patches, and other maintenance problems. This simplified IT infrastructure makes IT jobs easier.

Another benefit of application delivery is that you can deliver any Windows application to any remote device. For instance, legacy ERP can be remotely published to iPads, Android tablets, or even Chromebooks. It provides a seamless and consistent end-user experience across all devices.

As applications are installed on the application delivery server and remotely published to client devices, businesses can save significant amounts on hardware and software purchases, as well as licensing and operational costs.

Finding the right system

Look for a system that is easy to deploy and use. After all, the goal is to simplify the management and application delivery to employees – not to make it more complex with another layer.

Ideally, it should work in any environment (on-premise, public cloud, private cloud or hybrid) and feature pre-configured components and a straightforward set-up process, so installation is not too labour-intensive.

The best solutions can cost effectively transform any Windows application into a cloud application that is accessible from web browsers as well as any device, including Android, iPad, iPhone, Mac, Windows, Chromebook, and Raspberry Pi.

A clientless workspace is also important, so users can access published applications and virtual desktops using any HTML5-compatible browser, such as Chrome, Firefox, Internet Explorer (and Edge), or Safari.

Your application and desktop delivery solution should also provide support for Citrix, VMware, and Microsoft Hyper-V, so sysadmins can build a VDI solution, using a wide range of technologies, to suit their organisations’ technology and cost requirements.


May 1, 2018  8:52 AM

DevOps top 5 tips: Securing management buy-in

Caroline Donnelly

In this guest post, Pavan Belagatti, a DevOps influencer working at automation software provider Shippable, shares his top five tips for securing senior management buy-in for your team’s agile ambitions.

With the ever-changing technology landscape and growing market competition, being able to adapt is important to give your organisation a competitive advantage and – ultimately – succeed.

While you might have a flawless product, customer requirements are constantly evolving, and if you don’t listen to what customers want, there’s a problem: they’ll simply find a provider that does.

From waterfall to DevOps

Change is the only constant, and it applies to every organisation. In response, the software industry has evolved from following the waterfall model of software development to adopting DevOps methodologies, where the emphasis is on shorter release cycles, on-going integration, and continuous delivery.

And while there are many successful case studies on how companies are applying DevOps to their software delivery and deployment methods, there are many that have yet to jump on the bandwagon.

They may be afraid to change and adapt, but risk seeing their growth stagnate if they are resistant to change. If you are a software engineer in such a company, what can you do to make senior management change their minds?

Here are five steps that you can use to persuade your manager and management to adopt a DevOps mindset.

1.  Offer some wider reading on DevOps

Compile a reading list of articles about corporations that have benefited from adopting DevOps, including industry-specific examples, and links to research reports that detail the gains made by organisations that have gone down this route.

2. Solve a small, but meaningful problem with DevOps

Find a place where you think your software development is visibly lacking and try to improve it. If you have demonstrable proof employing agile-like methodologies helped fix it, senior management might be more inclined to start experimenting with it on other projects.

In this vein, try not to talk about abstract stuff (“we need to optimise the release cycle”). Instead, create a process with a continuous integration (CI) tool and show them how it works in practice. For instance, if the CI automatically tests the application for errors on every push, and the release to production has shortened from 2 hours to 15 minutes, make that known.
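
What that looks like in practice will depend on the CI tool, but the gate it runs on every push can be as simple as the sketch below; it assumes a hypothetical Python project with flake8 and pytest installed, and a real pipeline would live in the CI tool’s own configuration.

    # ci_check.py: a hypothetical gate a CI job could run on every push.
    # It lints, then tests, and exits non-zero so the CI tool marks the build as broken.
    import subprocess
    import sys

    STEPS = [
        ["flake8", "."],   # static checks for obvious errors
        ["pytest", "-q"],  # run the automated test suite
    ]

    for step in STEPS:
        print("Running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            sys.exit(result.returncode)

    print("All checks passed.")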

3. Measure the success

Find a way (beforehand) to measure the effect of your efforts (using KPIs, for example) and perform these measurements. That will guide the organisation’s efforts, and provide concrete evidence to your peers or managers to convince them that DevOps is the way to go.

4. Pinpoint business processes that can be improved with DevOps

Model how they can be enhanced and calculate an ROI based on this input versus the “legacy” way of doing things. You can start implementing some practices and create success first. Once that’s done it should be easier to convince the manager to go further.

5. Set out a strategy for introducing DevOps to your organisation

This should feature concrete suggestions to help you create a small but convincing implementation plan that sets out the benefits of adopting this approach to software development and deployment.

Ask your manager where they think the software project in question is heading and let them know you are keen on pushing the DevOps agenda, alongside carrying out your day-to-day duties. And, prepare a presentation for the rest of the organisation, and share what you think the business has to gain from embracing DevOps.


April 10, 2018  9:04 AM

Using cloud to help public transport operators put passengers first

Caroline Donnelly

In this guest post, Craig Stewart, vice president of product management at cloud integration provider SnapLogic, spells out why the more traditional transport operators need to take a leaf out of their cloud-native competitors’ books to stay relevant and in-touch with changing customer demands. 

The phrase, “you wait ages for a bus, and then three come along at once,” might sound like a cliché, but it speaks volumes about the inherent difficulties that come from running a mass transportation network.

A late train here, a spot of traffic there, and the delicate balance between all the moving parts collapses. As a result, we miss our connecting train, wait 20 minutes for a bus, and then our flight gets cancelled.

In days gone by, these kinds of issues were just something most travellers had to put up with, as they had no other option for getting around. And so, with little external pressure, there was no need for innovation, and the back-end data management systems underpinning the transport network stagnated.

Things have changed, however. Disruptors like Uber have changed the transportation game and prompted customers to revise their expectations upwards.

As such, they want real-time travel status updates, alternative route suggestions if things do go awry, and the ability to talk to staff who have a full view of the situation, rather than people who are as clueless about the situation as the travellers.

As it stands, the information infrastructure of UK mass transport networks is not fit for the task, and in dire need of renovation.

Imitate to innovate

Imitation is the sincerest form of flattery, and for traditional transport operators, competing in the new mobility landscape means taking note of what the disruptors are up to.

They need to offer the flexibility that customers now expect, and move away from the rigid approaches to timetables and scheduling of the past.

The transport jargon for this transition is mobility-as-a-service (MaaS). It’s certainly not a reality for traditional transport operators currently, but it’s a hot industry topic that is set to be a key area of focus for the public transport industry over the next few years.

Achieving a new MaaS model will require an acceleration in both technology and mindset, particularly when it comes to better understanding of the needs and expectations of the customer.

Some transport operators have already made encouraging first steps in this direction. Transport for London (TfL), for example, has its Unified API, which allows third-party developers to access its data to create value-added apps and services.

TfL’s website even goes as far as to state: “We encourage software developers to use this data to present customer travel information in innovative ways”.
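
As a brief illustration of what that openness enables, a developer can pull live line status from the Unified API in a few lines of Python; the endpoint and field names below follow TfL’s public documentation as I understand it, and are worth checking against api.tfl.gov.uk before relying on them.

    # Fetch the current status of London Underground lines from TfL's Unified API.
    import requests

    response = requests.get("https://api.tfl.gov.uk/Line/Mode/tube/Status", timeout=10)
    response.raise_for_status()

    for line in response.json():
        statuses = ", ".join(s["statusSeverityDescription"] for s in line["lineStatuses"])
        print(f"{line['name']}: {statuses}")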

This attitude, however, is currently the exception rather than the rule, and transport operators need to do more like this to make the traveller’s experience as convenient as possible.

This is something retailers have been striving for with their customer-focused omnichannel initiatives for many years. A similar approach, combining robust CRM systems with big data platforms such as Azure and analytics tools, may help transport providers match service provision to customer needs.

Allowing transport operators to more efficiently manage and operate their assets may also help deliver cost-savings for an industry with historically low margins.

Data sharing for customer caring

As alluded to above, local public transport is a shared endeavour, involving multiple local authorities and various bus and rail operators all working together across a region.

In their quest for a more flexible service structure, the use of and sharing of customer data will be paramount.

It’s all well and good if one operator has a vivid, accurate portrait of customers and their needs, shifting its operations accordingly, but this data has to be available across the length of a person’s journey, even if it crosses other operators’ services.

The sharing of data securely across departments and with partners is a key stepping stone in shifting transport’s perception of passengers from mere users to customers.

Although responsibility for a passenger may end when they leave the operator’s service, ensuring consistent quality across the full journey is imperative for the shifting business models of the traditional transport sector.

Cloud as a stepping stone to customer-centric care

Essentially, what public transport operators require is a comprehensive digital transformation initiative that changes how they manage operations and approach their customers.

With so many new systems to deploy and data to integrate, there’s no real alternative option except using cloud to achieve the flexibility and scalability needed to make these changes within a set time frame.

What’s more, only the cloud will ensure future technology advances, particularly around machine learning and the Internet of Things, can be quickly deployed to keep tomorrow’s innovations on track, and the disruptors at bay.

Like many industries before it, transport is guilty of growing complacent in its seemingly privileged position. Now it needs to start treating its passengers more like people and less like cargo, and shape its business for them, rather than the other way around.


April 4, 2018  2:59 PM

Multi-cloud: What enterprises need to know

Caroline Donnelly

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS, takes a look at what enterprises need to bear in mind before they take the plunge on multi-cloud.

There’s no doubt that enterprise interest in multi-cloud deployments is on the rise, as organisations look for ways to run their applications and workloads in the cloud environments where it makes most sense to do so.

When enterprises move workloads on demand, they can achieve cost advantages, avoid supplier lock-in, support fiduciary responsibility, and reduce risk, but there is a flip side to this coin.

Enterprises in the single-cloud camp counter that multi-cloud introduces unnecessary complexity, increasing management overhead and diluting the value of cloud to the lowest common denominator of commodity infrastructure.

In short, they argue, customers taking the multi-cloud approach fail to fully exploit all the advantages the cloud model can provide.

So who’s right?

Multi-cloud to avoid vendor lock-in

One of the most popular arguments we hear in favour of adopting a multi-cloud strategy is to avoid vendor lock-in.  After all, who wants to be tied to one supplier who notionally holds all the cards in the relationship when cloud has made it possible to run workloads and applications anywhere?

In my view, this is a partially-flawed argument, given that large enterprises have always been tied into inflexible multi-year contracts with many of the established hardware and software suppliers.

For some enterprises, multi-cloud means using different providers to take advantage of the best bits of each of their respective platforms.  For others, multi-cloud is about using commodity compute and storage services in an agnostic fashion.

Anyone who opts for full-fat multi-cloud will have access to an extensive choice of capabilities from each supplier, but this comes at a price.

Only by limiting their cloud adoption to the Infrastructure-as-a-Service (IaaS) level will they unlock the benefits that come from having access to near-infinite on-demand compute and storage capability, including reduced costs.

But this skimmed-milk version of multi-cloud increases the burden of managing numerous suppliers and technology stacks, limiting the ability to increase pace and agility and hampering the ability to unlock the truly transformative gains of cloud.

Managing a multi-cloud

Many enterprises underestimate the management and technical overheads – plus skillsets – involved in supporting a multi-cloud strategy.  Managing infrastructure, budgets, technology deployments and security across multiple vendors quickly progresses from a minor irritation to a full-blown migraine.

While there are a plethora of tools and platforms designed to address this, there is no single, silver bullet to magic away the pain.

As well as considering the in-house skills sets required for a multi-cloud model, another factor is the location of your data.  With the increasing focus on aggregating data and analysing it in real-time, the proximity of data impacts any decision on the use of single cloud vs multi-cloud vs hybrid.

The costs associated with data ingress and egress between on-premise and multiple cloud providers need to be carefully assessed and controlled.

Best of both

But don’t panic, there is another option. It is possible to secure the best of both worlds by opting for an agnostic approach with a single cloud provider and designing your own ‘get out of jail free’ card that makes it easy to move to one of its competitors. With a clear exit strategy in place at the outset, you can take advantage of the full capabilities of a single vendor and avoid significant costs.

The exit strategy needs to address the use of higher value services. For example, if you use Lambda or Kinesis in the Amazon Web Services cloud, what options would you have if you decide to move to Microsoft Azure, Google Cloud Platform or even back on-premise?

Always consider the application architecture: if you aim for a loosely coupled, stateless design, it will be easier to port applications.

Assuming you extend the use of cloud resources beyond commodity IaaS into higher-level PaaS services, such as AWS’s Lambda, you will develop functions as reactions to events.

This pattern, although labelled ‘serverless’, can be replicated in a non-serverless environment by making these functions available as microservices.
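
A minimal sketch of that idea, with a hypothetical order-handling function: the same Lambda-style handler is re-exposed through a small Flask service, so it can run in a container on any provider rather than being tied to one serverless platform.

    # A Lambda-style, event-driven function (hypothetical business logic).
    def handler(event, context=None):
        return {"order_id": event.get("order_id"), "status": "accepted"}

    # The same function re-exposed as a plain HTTP microservice.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/orders", methods=["POST"])
    def invoke():
        return jsonify(handler(request.get_json(force=True)))

    if __name__ == "__main__":
        app.run(port=8080)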

If you later decide to migrate away from a single provider, you could use Google Cloud Functions or a combination of microservices and containerisation, for example.

By maintaining a clear understanding of which proprietary services are being consumed, you will make it easier to re-host elsewhere while complying with regulations.

As with most enterprise decisions, there’s no clear-cut right or wrong answer on which leap into the cloud to take. The right approach for your organisation will reflect the need to balance the ambition to adopt new technology and increase the pace of change, with the desire to manage risk appropriately.


