Ahead in the Clouds


April 10, 2018  9:04 AM

Using cloud to help public transport operators put passengers first

Caroline Donnelly

In this guest post, Craig Stewart, vice president of product management at cloud integration provider SnapLogic, spells out why the more traditional transport operators need to take a leaf out of their cloud-native competitors’ books to stay relevant and in touch with changing customer demands.

The phrase “you wait ages for a bus, and then three come along at once” might sound clichéd, but it speaks volumes about the inherent difficulties that come with running a mass transportation network.

A late train here, a spot of traffic there, and the delicate balance between all the moving parts collapses. As a result, we miss our connecting train, wait 20 minutes for a bus, and then our flight gets cancelled.

In days gone by, these kinds of issues were just something most travellers had to put up with, as they had no other option to get around. And so, with little external pressure, there was no need for innovation, and the back-end data management systems underpinning the transport network stagnated.

Things have changed, however. Disruptors like Uber have changed the transportation game and prompted customers to raise their expectations.

As such, they want real-time travel status updates, alternative route suggestions if things do go awry, and the ability to talk to staff who have a full view of the situation, rather than people who are just as clueless as the travellers themselves.

As it stands, the information infrastructure of UK mass transport networks is not fit for the task, and in dire need of renovation.

Imitate to innovate

Imitation is the sincerest form of flattery, and for traditional transport operators, competing in the new mobility landscape means taking note of what the disruptors are up to.

They need to offer the flexibility that customers now expect, and move away from the rigid approaches to timetables and scheduling of the past.

The transport jargon for this transition is mobility-as-a-service (MaaS). It’s certainly not a reality for traditional transport operators currently, but it’s a hot industry topic that is set to be a key area of focus for the public transport industry over the next few years.

Achieving a new MaaS model will require an acceleration in both technology and mindset, particularly when it comes to better understanding of the needs and expectations of the customer.

Some transport operators have already made encouraging first steps in this direction. Transport for London (TfL), for example, has its Unified API, which allows third-party developers to access its data to create value-added apps and services.

TfL’s website even goes as far as to state: “We encourage software developers to use this data to present customer travel information in innovative ways”.
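To give a sense of how accessible that data is, the sketch below pulls live Tube line status from TfL’s publicly documented Unified API with a single HTTP request. This is a minimal illustration using Python’s requests library; the field names are based on the API’s published response format and should be checked against the current documentation, and registering for an application key is advisable for anything beyond casual use.

```python
# Minimal sketch: fetch live status for all London Underground lines
# from TfL's Unified API (https://api.tfl.gov.uk). Field names are based
# on the published response format and should be verified against the
# current API documentation.
import requests

def tube_status():
    resp = requests.get("https://api.tfl.gov.uk/Line/Mode/tube/Status", timeout=10)
    resp.raise_for_status()
    for line in resp.json():
        # Each line carries one or more status entries, e.g. "Good Service"
        statuses = ", ".join(s["statusSeverityDescription"]
                             for s in line.get("lineStatuses", []))
        print(f"{line['name']}: {statuses}")

if __name__ == "__main__":
    tube_status()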

This attitude, however, is currently the exception rather than the rule, and transport operators need to do more like this to make the traveller’s experience as convenient as possible.

This is something retailers have been striving for with their customer-focused omnichannel initiatives for many years. A similar approach, combining robust CRM systems with big data platforms such as Azure and analytics tools, may help transport providers match service provision to customer needs.

Allowing transport operators to more efficiently manage and operate their assets may also help deliver cost-savings for an industry with historically low margins.

Data sharing for customer caring

As alluded to above, local public transport is a shared endeavour, involving multiple local authorities and various bus and rail operators all working together across a region.

In their quest for a more flexible service structure, the use and sharing of customer data will be paramount.

It’s all well and good if one operator has a vividly accurate portrait of customers and their needs, and shifts its operations accordingly, but this data has to be available across the length of a person’s journey, even if it crosses other operators’ services.

The sharing of data securely across departments and with partners is a key stepping stone in shifting transport’s perception of passengers from mere users to customers.

Although responsibility for a passenger may end when they leave the operator’s service, ensuring consistent quality across the full journey is imperative for the shifting business models of the traditional transport sector.

Cloud as a stepping stone to customer-centric care

Essentially, what public transport operators require is a comprehensive digital transformation initiative that changes how they manage operations and approach their customers.

With so many new systems to deploy and data to integrate, there’s no real alternative to using cloud to achieve the flexibility and scalability needed to make these changes within a set time frame.

What’s more, only the cloud will ensure future technology advances, particularly around machine learning and the Internet of Things, can be quickly deployed to keep tomorrow’s innovations on track, and the disruptors at bay.

Like many industries before it, transport is guilty of growing complacent in its seemingly privileged position. Now it needs to start treating its passengers more like people and less like cargo, and shape its business for them, rather than the other way around.

April 4, 2018  2:59 PM

Multi-cloud: What enterprises need to know

Caroline Donnelly

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS, takes a look at what enterprises need to bear in mind before they take the plunge on multi-cloud.

There’s no doubt that enterprise interest in multi-cloud deployments is on the rise, as organisations look for ways to run their applications and workloads in the cloud environments where it makes the most sense to do so.

When enterprises move workloads on demand, they can achieve cost advantages, avoid supplier lock-in, support fiduciary responsibility, and reduce risk, but there is a flip side to this coin.

Enterprises in the single-cloud camp counter that multi-cloud introduces unnecessary complexity, increasing management overhead and diluting the value of cloud to the lowest common denominator of commodity infrastructure.

In short, customers taking this approach fail to fully exploit all the advantages the cloud model can provide.

So who’s right?

Multi-cloud to avoid vendor lock-in

One of the most popular arguments we hear in favour of adopting a multi-cloud strategy is to avoid vendor lock-in.  After all, who wants to be tied to one supplier who notionally holds all the cards in the relationship when cloud has made it possible to run workloads and applications anywhere?

In my view, this is a partially-flawed argument, given that large enterprises have always been tied into inflexible multi-year contracts with many of the established hardware and software suppliers.

For some enterprises, multi-cloud means using different providers to take advantage of the best bits of each of their respective platforms.  For others, multi-cloud is about using commodity compute and storage services in an agnostic fashion.

Anyone who opts for full-fat multi-cloud will have access to an extensive choice of capabilities from each supplier, but this comes at a price.

Only by limiting their cloud adoption to the Infrastructure-as-a-Service (IaaS) level will they unlock the benefits that come from having access to near-infinite on-demand compute and storage capability, including reduced costs.

But this skimmed-milk version of multi-cloud increases the burden of managing numerous suppliers and technology stacks, limiting the ability to increase pace and agility and hampering the ability to unlock the truly transformative gains of cloud.

Managing a multi-cloud

Many enterprises underestimate the management and technical overheads – plus skillsets – involved in supporting a multi-cloud strategy.  Managing infrastructure, budgets, technology deployments and security across multiple vendors quickly progresses from a minor irritation to a full-blown migraine.

While there is a plethora of tools and platforms designed to address this, there is no single silver bullet to magic away the pain.

As well as considering the in-house skillsets required for a multi-cloud model, another factor is the location of your data. With the increasing focus on aggregating data and analysing it in real time, the proximity of data impacts any decision on the use of single cloud vs multi-cloud vs hybrid.

The costs associated with data ingress and egress between on-premise and multiple cloud providers need to be carefully assessed and controlled.

Best of both

But don’t panic, there is another option. It is possible to secure the best of both worlds by opting for an agnostic approach with a single cloud provider and designing your own ‘get out of jail free’ card that makes it easy to move to one of its competitors. With a clear exit strategy in place at the outset, you can take advantage of the full capabilities of a single vendor and avoid significant costs.

The exit strategy needs to address the use of higher value services. For example, if you use Lambda or Kinesis in the Amazon Web Services cloud, what options would you have if you decide to move to Microsoft Azure, Google Cloud Platform or even back on-premise?

Always consider the application architecture: if you aim for a loosely coupled, stateless design, it will be easier to port applications.

Assuming you extend the use of cloud resources beyond commodity IaaS into higher-level PaaS services, such as AWS’s Lambda, you will develop functions as reactions to events.

This pattern, although labelled ‘serverless’, can be replicated in a non-serverless environment by making these functions available as microservices.
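As a rough sketch of how that portability can be preserved, the snippet below keeps the business logic in a plain Python function and reduces the provider-specific handler and a self-hosted microservice route to thin adapters around it. The function and route names are hypothetical; the point is that only the adapters change if you move between a serverless platform and a containerised deployment.

```python
# Sketch: keep business logic provider-agnostic so the same function can be
# exposed as a serverless handler or as a containerised microservice.
# Names (process_order, /orders) are illustrative only.
from flask import Flask, jsonify, request

def process_order(order: dict) -> dict:
    """Pure business logic: no provider APIs, no stored state."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["id"], "total": total}

# Adapter 1: an AWS Lambda-style entry point (event in, result out).
def lambda_handler(event, context):
    return process_order(event)

# Adapter 2: the same logic exposed as an HTTP microservice, suitable for
# running in a container behind any load balancer.
app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def orders():
    return jsonify(process_order(request.get_json()))

if __name__ == "__main__":
    app.run(port=8080)
```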

If you later decide to migrate away from a single provider, you could move to Google Cloud Functions, for example, or to a combination of microservices and containerisation.

By maintaining a clear understanding of which proprietary services are being consumed, you will make it easier to re-host elsewhere while complying with regulations.

As with most enterprise decisions, there’s no clear-cut right or wrong answer on which leap into the cloud to take. The right approach for your organisation will reflect the need to balance the ambition to adopt new technology and increase the pace of change, with the desire to manage risk appropriately.


March 28, 2018  3:17 PM

Better cloud management: the Ops team vs. machines

Caroline Donnelly

In this guest post, Steve Lowe, CTO of student accommodation finder Student.com, weighs up the risks and benefits of relying on machines to carry out Ops jobs within cloud infrastructures.

The “if it ain’t broke, don’t fix it” mantra is firmly ingrained in the minds of most engineers, and – in practice – it works when your resource is fixed and there is no upside to scaling down.

However, when you are paying by the hour (as is often the case with cloud) or by the second, scaling down can make a huge difference.

Assembling a team of engineers able to make quick, risk-based decisions using all the information at their disposal is no easy feat.

Worryingly, within modern-day teams, there is often a tendency to think things will work themselves out in a few minutes. Or, if something is working fine, that the best approach is to let it run and be safe, rather than scale it down.

People vs. machines

In a high-pressure situation, even some of the best decision makers and quickest thinkers can be hard-pressed to come up with a viable solution within the required period of time.

This is where the emotionless machine wins hands down every time. Wider ranges of data sets can be joined together and analysed by machines that can use that data to make scaling decisions.

On top of that, these decisions are made in a fraction of the time a team of engineers would use. As such, if you tell your machine to follow your workloads, it will do just that.

Another benefit of relying on emotionless machines is their automation and reliability, meaning workloads can be easily repeated and delivered consistently to the required standard.

Here comes Kubernetes

As enticing as all this may sound, especially to organisations wanting to scale-up, the devil is in the implementation. For a number of years, a significant amount of work was required to implement auto-scaling clusters in a sensible way. This was especially the case with more complex systems.

The solution? Kubernetes. As long as your software is already running in containers, Kubernetes makes implementation much simpler and smoother. And autoscaling a Kubernetes cluster based on cluster metrics is a relatively straightforward task.
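As a minimal sketch of the pod-level piece of that picture, the snippet below uses the official Kubernetes Python client to attach a CPU-based HorizontalPodAutoscaler to a hypothetical Deployment; node-level scaling is typically handled separately by the cluster autoscaler. The deployment name and thresholds are illustrative assumptions.

```python
# Sketch: create a CPU-based HorizontalPodAutoscaler for an existing
# Deployment using the official Kubernetes Python client.
# The deployment name ("web") and the thresholds are illustrative.
from kubernetes import client, config

def create_hpa(namespace: str = "default"):
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out above 70% CPU
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa)

if __name__ == "__main__":
    create_hpa()
```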

Kubernetes takes care of keeping services alive and load balancing the containers across the compute environment. And, finally, enterprises get the elastically-scaling compute resource they always dreamed of.

What to do with the Ops crew?

Now that the machines have helped the organisation free up all this time, the question of what to do with the people in the Ops team needs answering.

Well, with cyber attacks increasing in both number and sophistication, there’s never been a better time to move free resources into security. With a little training, the Ops team are perfectly positioned to focus on making sure systems are secure and robust enough to withstand attacks.

With the addition of Ops people with practical hands-on experience, the organisation will be better positioned to tackle future problems. From maintaining and testing patches to running penetration testing, the Ops people will add direct value to your company by keeping it safe, while the machines take care of the rest.


March 23, 2018  11:31 AM

Cloud computing: Past, present and future

Caroline Donnelly
adoption, cloud, Virtualisation

In this guest post, Susan Bowen, vice president and general manager at managed service provider Cogeco Peer 1, takes a look at the history of cloud computing, and where enterprises are going with the technology.

The late 1960s were optimistic days for technology advancements, reflected headily in the release of Stanley Kubrick’s iconic 2001: A Space Odyssey and its infamously intelligent computer H.A.L. 9000.

Around this time the concept of an intergalactic computing network was first mooted by a certain J.C.R. Licklider, who headed up the Information Processing Techniques Office at the Pentagon’s Advanced Research Projects Agency (ARPA).

Charged with putting the US on the front foot, technologically, against the Soviets, ARPA was a hotbed of forward-thinking that pushed technology developments well beyond their existing limits.

Within this context, Licklider envisioned a global network that connected governments, institutions, corporations and individuals. He foresaw a future in which programs and data could be accessed from anywhere. Sounds a bit like cloud computing, doesn’t it?

This vision actually inspired the development of ARPANET, an early packet switching network, and the first to implement the protocol suite TCP/IP. In short, this was the technical foundation of the internet as we now know it.

It also laid the foundation for grid computing, an early forerunner of cloud, which linked together geographically dispersed computers to create a loosely coupled network. In turn this led to the development of utility computing which is closer to what Licklider originally envisioned.

It’s also closer to what we think of as the cloud today, with a service provider owning, operating and managing the computing infrastructure and resources, which are made available to users on an on-demand, pay-as-you-go basis.

From virtualisation to cloud

Dovetailing with this is the development of virtualisation which, along with increased bandwidth, has been a significant driving force for cloud computing.

In fact, virtualisation ultimately led to the emergence of operations like Salesforce.com in 1999, which pioneered the concept of delivering enterprise applications via a simple website.

Salesforce paved the way for a wide range of software firms to deliver applications over the internet. But the emergence of Amazon Web Services in 2002 really set the cloud ball rolling, with its cloud-based services including storage and compute.

The launch of Amazon’s Elastic Compute Cloud (EC2) several years later was the first widely accessible cloud computing infrastructure service, allowing small companies to rent computers to run their own applications on.

It was a godsend for many small businesses and start-ups, helping them eschew costly in-house infrastructure and get to market quickly.

When cloud is not the answer

Of course, cloud went through a ‘hype’ phase and certainly there were unrealistic expectations and a tendency to see the cloud as a panacea that could solve everything.

This inevitably led to poor decision making when crafting a cloud adoption strategy, often characterised by a lack of understanding and under-investment.

This led a few enterprises to adopt an “all-in” approach to cloud, only to pull back as their migration plans progressed, with few achieving their original objectives.

Today expectations are tempered by reality and in a sense the cloud has been demystified and its potential better understood.

For instance, business discussions have moved from price-points to business outcomes.

Enterprises are also looking at the cloud on an application-by-application basis, considering what works best as well as assessing performance benchmarks, network requirements, their appetite for risk, and whether cloud suppliers are their competitors or not.

In short, they understand that the agility and flexibility of the cloud make it a powerful business tool, but they want to understand the nuts and bolts too.

The evolution of cloud computing

Nearly every major enterprise has virtualised its server estate, which is a big driver for cloud adoption.

At the same time the cloud business model has evolved into three categories, each with varying entry costs and advantages: infrastructure-as-a-service, software-as-a-service, and platform-as-a-service. Within this context private cloud, colocation and hosted services remain firmly part of the mix today.

While cloud has its benefits, enterprises are running into issues, as their migrations continue.

For instance, the high cost of some software-as-a-service offerings has caught some enterprises out, while the risk of lock-in with infrastructure-as-a-service providers is also a concern for some. As the first flush of cloud computing dissipates, enterprises are becoming more cautious about being tied into a service.

The scene is now set for the future of cloud, with a focus on hybrid and multi-cloud deployments. In five years’ time, terminology like hybrid IT, multi-cloud and managed services will be a thing of the past.

It will be a given. There won’t just be one cloud solution or one data solution; it will be about how clouds connect and about maximising how networks are used.

Hybrid solutions should mean that workloads will automatically move to the most optimised and cost-effective environment, based on performance needs, security, data residency, application workload characteristics, end-user demand and traffic.

This could be in the public cloud, or private cloud, or on-premise, or a mix of all. It could be an internal corporate network or the public internet, but end-users won’t know, and they won’t care either.

Making this happen will require application architectures to be refactored, with existing designs shifting from the classic three-tier model to event-driven.

Cloud providers are already pushing for these patterns to be widely adopted for cloud-native applications. And as enterprises evolve to adapt, hyperscale public clouds and service providers are adapting to reach back into on-premise data centres.

This trend will continue over the next five to ten years, with the consumption-based nature of cloud growing in uptake.

While we wouldn’t call this the intergalactic computing network as ARPA’s Licklider originally envisioned, it’s certainly moving closer to a global network that will connect governments, institutions, corporations and individuals.


March 19, 2018  12:05 PM

Data-sharing and cloud: A big data match made in heaven

Caroline Donnelly
Big Data, cloud, Collaboration, Data sharing

In this guest post, Thibaut Ceyrolle, vice president for Europe, Middle-East and Africa (EMEA) at data warehousing startup Snowflake Computing, makes the business case for using cloud to boost data-sharing within enterprises and beyond.

The phrase, ‘data is the new oil’, continues to hold true for many organisations, as the promise of big data gives way to reality, paving the way for companies to glean valuable insights about their products, services, and customers.

According to Wikibon, the global big data market will grow from $18.3bn in 2014 to an incredible $92.2bn by 2026, as the data generated by the Internet of Things (IoT), social media and the web continues to grow.

Traditional data warehousing architectures and big data platforms, such as Hadoop, which grew out of the desire to handle large datasets, are now beginning to show signs of age as the velocity, variety and volume of data rapidly climb.

To cope with this new demand, the evolution and the innovation in cloud technologies have steadily simmered alongside the growth in data. Cloud has been a key enabler in addressing a vital customer pain-point of big data: how do we minimise the latency of data insight? Data-sharing is the answer.

The business of big data

By channelling big data through purpose-built, cloud-based data warehousing platforms, organisations can share it internally and between each other in real time, with greater ease, helping them respond better to the market.

Through cloud, organisations are able to share their data in a governed and secure manner across the enterprise, ending the segregation and latency of data insights among both internal departments and external third parties.

Previously, analysing data sets was limited to the IT or business intelligence teams that sat within an organisation. But as data continues to grow and become an important ‘oil’ for enterprises, it has caused a shift in business requirements.

For data-driven companies, data is now being democratised across the entire workforce. Different sections within a business such as sales, marketing or legal will all require access to data, quickly and easily, but old big data architectures prevented this.

The advent of cloud, by contrast, effectively supports concurrent users, meaning everyone from junior to board-level employees can gain holistic insights to boost their understanding of both the business and its customers.

On-premise legacy solutions also struggle to process large datasets, taking days or even weeks to extract and process the data at a standard ready for organisations to view. This data would then need to be shifted to an FTP server, an Amazon Web Services S3 bucket or even an email.

But the cloud has become a referenceable space, much like how the internet operates. We are easily able to gain insights by visiting a webpage, and in the same way, the cloud can also be used as a hub to quickly access this data.

Data-sharing enables collaboration

Data-sharing not only improves the speed of data insights, but can also help strengthen cross-company collaboration and communication protocols too.

Take a multinational company, for example: it is often divided into many subsections, or has employees based across the globe. Yet – through cloud – data sharing can help bring these separate divisions together, without having to replicate data insights for different regions.

As data volumes increase year-on-year, we also see data sharing evolve in the process. The insatiable desire for data will result in organisations tapping into the benefits of machine learning to help sift through the mountains of information they receive.

Organisations that capitalise on machine learning will also be better positioned to draw on the variety of data sources available and glue them together to serve as an interconnected data network.

Most of the data pulled is from cloud-based sources and as a result, organisations can use machine learning to get 360 degree visibility of customer behaviours. This enables organisations to better tailor specific products or services to them based on their preferences. Without the cloud, this simply wouldn’t have been possible.

Walmart, the largest retailer in the world, has grown its empire by capitalising on big data and using machine learning to maximise customer experiences. By using predictive analytics, Walmart stores can anticipate demand at certain hours and determine how many employees are needed at the checkout, helping improve in-store experiences.

Other retailers, such as Tesco, are similarly using machine learning to better understand the buying behaviours of their customers before products are purchased, to provide a more seamless e-commerce experience.

As the data-sharing economy grows, data’s reputation as the new oil will continue to surge in the years ahead. Cloud-based technologies are working hand-in-hand with big data to not only offer key insights, but also serve as a foundation for collaboration and communication between all parties within the business ecosystem.

With more and more organisations drawing on cloud and machine learning, it will only fuel the business decisions made through data, offering unparalleled opportunities to respond to customer demands faster than ever before.


February 28, 2018  4:45 PM

Price caps: Keeping public cloud costs under control

Caroline Donnelly

In this guest post, Richard Blanford, managing director of cloud services provider Fordway, shares his top tips on what IT departments can do to curb public cloud overspend.

As the costs of public cloud services continue to fall, moving services off-premise increasingly looks like a sensible business decision. However, there is much more to running a service than server capacity, and if organisations are not careful their costs can quickly spiral out of control.

To keep public cloud costs in check, in-house IT teams need to educate themselves on cloud charging models, and develop new skills to ensure overspend is avoided.

Get the skills to pay the bills

The first key skill is to ensure you understand the characteristics and requirements of the service or application being moved to the cloud.

Software-as-a-Service (SaaS) is pretty straightforward, whereas Platform-as-a-Service (PaaS) and particularly Infrastructure-as-a-Service (IaaS) are where design expertise is needed to ensure the applications and services being moved fit with your target cloud provider.

Secondly, once migrated, your team will need the ability to decipher cloud usage reports and invoices.  As well as the known costs for servers and storage, there will be additional ones for ancillary requirements such as IP addresses, domain resilience and data transfers into, out of and between servers.

As an example, for an IaaS instance on Amazon Web Services, there are a minimum of five and potentially eight metered costs that need to be tracked for a single internet-facing server. Microsoft Azure and other public cloud providers are comparable.

The complexity increases if the organisation is hosting complex, multi-server environments.  If other elements are required to run the application, such as security, resilience, management, patching and back-up, these will appear as additional charges.

This is less of an issue with SaaS as it usually has a standard per user per month charge, but with IaaS, and to some extent PaaS, other elements are added on top.

Having worked out the origin of each cost, a number of relatively simple and quick analyses can be done to identify the main reasons for any overspend. The key is to understand the likely use of each application.

Telling apart the cloud types

All public cloud services are metered, which can be both good and bad. If the way your applications work has been factored in when designing services prior to moving them to cloud, you are more likely to avoid unpleasant surprises.

In many services there is a cost per GB each time servers in different domains talk to each other, and a second cost per GB to send data over the internet.

For example, in AWS you are charged if you use a public IP address, and because you don’t buy dedicated bandwidth there is an additional data transfer charge against each IP address – which can be a problem if you create public facing websites and encourage people to download videos.

Every time a video is played, you’ll incur a charge, which may seem insignificant on its own but will soon add up if 50,000 people download your 100MB video, for example. In some applications servers have a constant two-way dialogue so costs that initially seem small can quickly escalate.
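To put rough numbers on that, a back-of-the-envelope calculation is all it takes. The per-GB rate below is a placeholder standing in for whatever your provider’s internet data-transfer tariff actually is, so treat the figures as illustrative.

```python
# Back-of-the-envelope egress cost for the 100MB-video example above.
# The $0.09/GB rate is a placeholder; check your provider's current tariff.
downloads = 50_000
video_size_gb = 0.1          # 100 MB expressed in GB
egress_rate_per_gb = 0.09    # placeholder internet data-transfer rate, USD

total_gb = downloads * video_size_gb          # 5,000 GB transferred
transfer_cost = total_gb * egress_rate_per_gb # roughly $450

print(f"{total_gb:,.0f} GB out ~= ${transfer_cost:,.2f} in transfer charges")
```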

The same issue applies with resilience and service recovery, where you will be charged for transferring data between domains.

To understand costs accurately you need to know the frequency of snapshots, how big those snapshots are and the rate of data change.

AWS and Azure charge for resilience in different ways; both will keep a copy and bring it up if a host fails, but with AWS you need a different type of service and pay extra, whereas for Azure it is included as standard.

There is also an array of options available for storage; Azure has five to choose from, each with different dependencies, as well as a whole new set of terminology.

All these need to be understood, compared and evaluated as part of choosing a service.  If you find storage and back-up costs escalating, IT staff need to take action to prevent the situation getting worse.

The best way to avoid unexpected costs is to look closely at the different types of service available, such as on-demand, reserved or spot instances, before moving to public cloud, and match your workload and requirement to the instance type.

Reserved instances are much cheaper per hour than on-demand, but you are then tied in for a specified period, which means you are unable to move quickly should a better commercial option be introduced. If an application is not optimised for public cloud, consider retaining it in-house or use a managed cloud service with agreed, predictable costs.

Finally, the pace of change in public cloud is very rapid; new and enhanced services are frequently introduced, and not often well publicised. To get best value you need to be prepared to regularly migrate your services between different instance types, classes or families. The cloud provider will not do this for you, so you will need the skills to do it yourself or contract a third party to do it for you.


February 22, 2018  9:20 AM

Cryptocurrency and colocation: Breaking down the Bitcoin mining barriers

Caroline Donnelly
Bitcoin, Colocation

In this guest post, Jack Bedell-Pearce, managing director of service provider 4D Data Centres, shares his thoughts on the role colocation can play in helping cryptocurrency miners boost their profits.  

Despite recent fluctuations and a downturn in value, Bitcoin remains one of the most popular cryptocurrencies.

While many see the potential profits in the likes of Ethereum, Ripple, and other up and coming cryptocurrencies, many more see mining them as a risky, and potentially unprofitable venture.

As the value of cryptocurrencies has increased over time, so have the challenges associated with mining them.

That’s because all cryptocurrencies rely on blockchain: a distributed, peer-to-peer ledger technology that ensures cryptocurrency transactions are validated and secure.

Miners add new blocks to the chain by using mining software to find a hash value – generated with a Secure Hash Algorithm such as SHA-256 – that meets the network’s difficulty target, and in return they receive cryptocurrency units.

Because miners are effectively competing to be the first to find a valid hash for each block, budding miners can encounter challenges, particularly as many cryptocurrencies limit how many units are in circulation at any time.
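To see why this is a race at all, the toy proof-of-work loop below shows the shape of the problem: repeatedly hashing block data with SHA-256 until the result falls below a difficulty target. It is a deliberately simplified sketch, not real mining software, and the difficulty value is arbitrary.

```python
# Toy proof-of-work sketch: find a nonce whose SHA-256 hash of the block
# data falls below a difficulty target. Real mining works on the same
# principle, but at vastly higher difficulty and on specialised hardware.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20):
    target = 2 ** (256 - difficulty_bits)   # hashes below this value "win"
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    nonce, digest = mine(b"example block header")
    print(f"nonce={nonce} hash={digest}")
```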

Furthermore, the mining scene for popular cryptocurrencies can be very competitive, making it difficult to get started. And, while less popular currencies may be simpler to mine, there’s no guarantee they will hold their value long-term.

Another key concern is securing enough energy, space and compute resources to power their cryptocurrency mining software in the first place.

Calculating the cost of cryptocurrency

Successful cryptocurrency mining requires a range of important assets that can potentially come at a high cost.

Firstly, all miners need hardware to power their mining applications. Some use a conventional CPU, others favour a customised graphics processing unit (GPU) or field-programmable gate array (FPGA). More recently some miners have started using pre-programmed, application-specific integrated circuits (ASICs).

Whatever hardware you decide to use, you’ll need to carefully consider how it balances cost and flexibility, and how this stacks up against potential profits.

While mining hardware often has a small physical footprint, the GPUs and ASICs they contain consume vast amounts of power. And when you factor in the additional power cost of keeping the hardware cool, it’s a significant outlay that can cut deep into potential profits.

For example, the bitcoin network currently uses approximately 16TWh of electricity per year, accounting for 0.08% of the world’s energy consumption, and the energy cost of a single transaction could power five households for a day.

Because solved blocks and their hashes must be submitted to the cryptocurrency network, it’s important for your mining operation to have a stable network connection.

Having a low-latency network connection gives users the best possible chance to solve a block and mine the cryptocurrency before anyone else can.

Significant players in the mining community have also been the targets of distributed denial of service (DDoS) attacks in the past. So, if you’re planning on mining seriously, you’ll want to ensure you have a secure network with protective measures in place to keep downtime to a minimum.

Similarly, physical security should also be a key concern if you plan on mining seriously. Without a secure site for keeping your mining hardware safe, you run the risk of theft.

Combining colocation and cryptocurrency

All of these mining requirements add up to a significant investment. While the costs can be substantial, the opportunity for generating revenue is higher than ever – and it’s an opportunity that many will want to capitalise on before the mining market becomes even more saturated.

So how can you cut the costs, reduce the risks of mining, and make the most of the cryptocurrency opportunity?

Colocation can help reduce the risks and costs associated with cryptocurrency mining – and maximise the amount of profit you can make from it.

By moving your mining equipment into a shared data centre managed by a third party you can:

  • Significantly reduce power cost: datacentres are designed to handle massive energy requirements in the most efficient way possible
  • Get a stable, low-latency network:  datacentres offer enterprise-class internet with significantly higher uptimes
  • Secure  valuable mining assets: datacentres can provide a myriad of security measures, ranging from CCTV and guards, to comprehensive DDoS protection.

 


February 20, 2018  1:32 PM

Next-generation cloud: Is serverless the answer?

Caroline Donnelly
cloud, containers

In this guest post, Naveen Kumar, vice president of innovation, enterprise software and consumer at global design and engineering company Aricent, makes the case for serverless computing.

As far as technology concepts go, containers and serverless computing are attracting fans and baffling IT departments in equal measure. And, while there is a degree of familiarity with the former, many are still trying to work out what role the latter will play within their organisation’s wider IT strategy.

Containers are an evolution of virtualisation technologies, and are abstracted from the underlying host execution environment, allowing them to be ported across other environments and multiple clouds.

Serverless, meanwhile, is an evolution of cloud computing and a bit of a misnomer. It does not mean “no servers”, but that developers no longer have to worry about underlying server and capacity management.

Developers write application functionality using serverless APIs and deploy their functionality on the underlying serverless platform, which – in turn – takes care of the provisioning and scaling up and down of resources based on usage.

The platform automatically manages services, which reduces operational overheads, and developers only pay for what they use.

The concept rocketed in popularity with the introduction of AWS Lambda in 2014, paving the way for the emergence of Azure Functions, Google Cloud Functions and OpenWhisk.
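In practice, the developer-facing surface is tiny: a single function invoked once per event, as in the minimal, hypothetical Lambda-style handler below. Nothing in the code provisions a server; the platform decides how many copies of the function to run and bills per invocation.

```python
# Minimal serverless sketch: one function, invoked once per event.
# Nothing here concerns servers or capacity; the platform handles
# provisioning, scaling and teardown, and charges per invocation.
import json

def handler(event, context):
    # 'event' carries the trigger payload (an HTTP request, a queue message,
    # a scheduled tick); 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```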

Containers and serverless technologies are not mutually exclusive. There is a need for both and a “mix and match” approach can be a very efficient way of managing business and other functions.

If vendor lock-in and fine-grained control are major concerns, then containers could be the way to go. Typically, serverless has been used for limited tasks, such as running regularly scheduled functions; for applications with multiple entry points, or those that would otherwise run in dedicated virtual machines (VMs), containers would be more efficient.

Making the case for serverless

Serverless does have a number of advantages over containers. It is utilised more for functional capabilities than executing business functions, and can automatically scale in response to spikes in demand and comes with a granular cost-model.

Furthermore, it brings operational simplicity and cost savings as programs can be run and terminated on call. Importantly, it makes the product life cycle more efficient and reduces overheads.

It also paves the way for cloud to truly function as a utility because it does not require any VM provisioning or upfront commitments. Enterprises only pay for what they use.

However, it comes with risks and service-level agreements (SLAs). A significant amount of operational overhead, previously managed by enterprises, now moves to serverless platform providers.

This means enterprises or application developers have to monitor their apps (error tracking and logging) and SLAs, rather than the underlying infrastructure.

Overcoming the challenges

Organisations planning to ramp up their serverless deployments need to consider a number of factors. They include speed of development and release, the complexity of the solution, the SLA risks, as well as the threat of management and operational overheads and vendor lock-in.

While containers can provide better control of infrastructure and distributed architecture, with serverless you have less control of stack implementation and more focus on the business functions that can be executed.

These factors can be seen as part of an evolutionary process for cloud computing which will ultimately make it easier to use.

Already-virtualised, monolithic applications can be containerised with some effort but moving to serverless computing requires rewriting from the ground up. That is why the latter is worth considering for new applications, where time-to-market is critical.

What to consider

There are several concrete challenges any enterprise should consider when weighing up a serverless solution.

  1. It may require existing applications to be rewritten entirely, if you want to run them as functions on a provider’s platform.
  2. It could contribute towards vendor lock-in, as the application is dependent on the service provider.
  3. Tools for developing and debugging serverless applications are maturing, while standards for moving between different platforms are yet to emerge.
  4. Network latency and costs are additional technical considerations.

The efficiencies gained from serverless increasingly make it seem like an attractive option, despite the potential drawbacks in terms of set-up time, control and quality-of-service. There is no doubt serverless will have a major impact on enterprises over the next few years.

AWS, Google, IBM and Microsoft  have freemium products that allow enterprises to try before they buy, and this is fuelling interest.

Put simply, serverless computing provides one more option for enterprises to use public cloud services.  A significant number of workload jobs currently deployed on public clouds have dynamic demand characteristics.

Serverless adoption will increase as it brings developer efficiencies, performance improvements and speeds up time-to-market. Adoption will further be driven as new or existing applications are re-architected to leverage public cloud services.  Its uptake is strongly linked to overall cloud adoption, which is growing and will continue to grow over time. In short: serverless computing is here to stay.


February 15, 2018  11:05 AM

The Field of (Cloud) Dreams: if you build it, will they come?

Caroline Donnelly
cloud, Cloud adoption, Cloud strategy

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation, draws inspiration from a 1989 Kevin Costner movie to advise enterprises on how best to approach their move to the cloud…

Using cloud offers enterprises opportunities to increase business agility and optimise costs, making it possible for them to compete more effectively with challenger businesses that benefit from being born in the cloud.

However, the cloud journey can mask a significant number of challenges that must be carefully navigated before those benefits can be realised.

Firstly, their strategy needs buy-in from all business stakeholders, and sponsorship at an executive level, ideally with board representation.

In reality, cloud initiatives often start from silos within an organisation, resulting in poor engagement, and failure to realise the potential transformational benefits.

These are not just the rogue instances of shadow IT, or stealth IT, that often spring to mind. In fact, at first sight cloud adoption can often appear to be driven from what might seem very logical areas within a business.

Agility vs. cost in cloud

When there’s a particular focus on cost optimisation, adoption is usually sponsored from within existing infrastructure delivery areas. This might typically be when heads of infrastructure or CTOs identify an opportunity to consolidate on-premise datacentre infrastructure with the expectation of cost savings.

Their thinking is correct. Closing down ageing facilities will reduce capital expenditure significantly. However, any new “off-premise datacentre” must be properly planned for it to deliver sustainable benefits to the business.

From an agility perspective, the journey to the cloud may be rooted in digital or innovation areas.

However, problems often arise when attempts are made to either broaden the reach of cloud within the organisation or when steps are made to take the capability into production.

A failure to have broad organisational engagement and support from all key stakeholders in either of these cases is likely to see best intentions stall at best, and often result in overall failure.

Stakeholder engagement

Without buy-in from application owners, data owners and relevant CIO teams, the promised cost benefits associated with migrating applications off-premise rarely materialise. The danger is you end up investing significant effort to develop a new cloud-based capability but, without an overarching strategy to exploit cloud, it may increase, rather than reduce, your costs.

As in the 1989 Field of Dreams movie, starring Kevin Costner, you need to ensure that once you build it, they will come.

Successful outcomes will only be fully achieved when the cloud strategy is embraced at an organisational level, with engagement across the enterprise. Stakeholders have to buy-in to a clearly defined strategy and understand the motivation for moving applications to the cloud. This has to be driven by a vocal, suitably empowered C-level executive.

It’s not enough to have a single, passionate senior (cheer) leader. This individual also has to sit at the right level in the organisation to have the power to influence and align disparate teams to commit to the same journey. Whilst cloud adoption requires top-down support, it also requires bottom-up evangelism for success.

Indeed, the need to evangelise and sell the benefits of cloud within an enterprise shouldn’t be underestimated. Establishing some quick wins will not only support integration from a technical and operational perspective, but provide the success stories that can help to shift the cultural needle. A focus on taking the organisation with you on the journey will contribute greatly to a successful outcome.

From disparate to joined up

If you look around your organisation and see an enthusiastic but renegade team heading off under their own steam to build their own Field of Dreams, don’t panic.

It is still possible to bring structure into the cloud adoption programme and reshape it as an organisation-wide, joined-up initiative. Start by taking a step back and bringing all the stakeholders inside the tent.

Clearly it will be more challenging to achieve retrospective buy-in but it is certainly possible to do so, providing the programme has not passed a tipping point.

Ensure you avoid disenfranchising any key stakeholder groups, because if this happens then it will be difficult to recover the situation.

Take a cue from Kevin Costner: don’t stand alone in a field of corn but act decisively to secure the right support and you too could be nominated for awards for your starring role in cloud adoption.


February 2, 2018  12:21 PM

Cloud billing: Cutting through the complexity

Caroline Donnelly
AWS, cloud, Google, IaaS

In this guest post, Kat Lee, data analyst at service provider Cloudreach, shares some best practice on cloud billing to help enterprises cut costs.

IT cost control and management is a hot topic of debate in enterprises seeking to drive down operational expenditure. As the cloud adoption mentality has increasingly become “not if, but when”, cloud cost control has become a key budgeting topic.

The majority of enterprises nowadays don’t just choose one cloud service provider, having realised the benefits of adopting a hybrid cloud model. But this now presents its own challenges. Each service provider has a different way of presenting cost and usage reporting, which can lead to complexity and confusion for end users.

Gartner research suggests 95% of business and IT leaders find cloud billing the most confusing element of using public cloud services. This confusion leads to a lack of governance and oversight which, of course, has a financial impact on the company.

Take stock of cloud resources

To get a handle on this, enterprises must get to the point where they know the amount of cloud resources being consumed by their teams.

The first point of action is to create a customised invoicing process to gather all the data related to cloud spend and break it down to a granular level. That way each employee and department can benchmark their usage. This will help the finance team identify any increases in spend as and when they happen and to understand the cause of them.

Cloud usage and cost reporting tools are of critical importance here. These tools are vital to give clarity on usage and cost allocation and give IT finance teams more accurate information when resource planning, avoiding overspend and an angry CIO.

Taking ownership of cloud billing

Amazon Web Services (AWS) and Google Cloud Platform (GCP)’s recent per-second billing could offer customers huge savings, but to take full advantage, end users must understand and own their billing process.

To get a handle on spend, enterprises must break down and allocate costs to the right business group, department, project and individual employee. Resource tagging is absolutely vital in this regard to get true cloud governance in place and understand which project is consuming what resources.
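As a sketch of what that breakdown can look like in practice, the snippet below groups a hypothetical billing export by its tags with pandas. The column names are assumptions standing in for whatever your provider’s export schema actually uses, since AWS, Azure and GCP each name these fields differently.

```python
# Sketch: allocate spend by tag from a billing export.
# Column names ("cost", "tag_project", "tag_department") are hypothetical
# stand-ins for the provider's actual export schema.
import pandas as pd

def cost_by_tag(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Untagged resources are the usual governance gap, so surface them explicitly.
    df["tag_project"] = df["tag_project"].fillna("UNTAGGED")
    df["tag_department"] = df["tag_department"].fillna("UNTAGGED")
    return (df.groupby(["tag_department", "tag_project"])["cost"]
              .sum()
              .sort_values(ascending=False)
              .reset_index())

if __name__ == "__main__":
    print(cost_by_tag("billing_export.csv"))
```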

Understanding the numerous ways cloud providers offer cost savings is vital to keeping cloud spend under control too.

AWS, Azure and GCP all offer discounts for committed usage of virtual machines, in varied ways, from upfront annual payments to reserved instances and automatic discounts on long running virtual machines. AWS also offers Spot Instances – a method of purchasing compute time at a significantly reduced cost when there is spare capacity – a virtual machine marketplace powered by supply and demand.

These cost saving opportunities are increasingly complex and the most mature cloud consumers will have dedicated resources focussed on managing costs.

Where next for cloud billing?

In the next 12 months, there is going to be increased growth and maturity in the cloud market. As enterprises increasingly see the inevitability of a move to the cloud, a key differentiator between cloud service providers is going to be how they deliver and bill enterprises for computing resources.

Having transparency and ownership over the billing cycle, and an understanding of the cost optimisation options available is the next step in helping enterprise customers make the most of cloud.


