Ahead in the Clouds


December 17, 2019  9:45 AM

AWS Re:Invent 2019 new product round-up: What CIOs need to know

Caroline Donnelly

In this guest post, Ivaylo Vrabchev, head of professional services at IT services management company, HeleCloud, shares his round-up of the most eye-catching announcements from this year’s AWS Re:Invent user and developer conference. 

More than 65,000 attendees trekked out to Las Vegas in early December for the eighth annual AWS Re:Invent user and developer conference.

AWS CEO Andy Jassy joked during his opening Re:Invent keynote that he would need every minute of the allotted three hours to ensure he covered all the new announcements the cloud giant had planned. And he wasn't wrong, with the keynote highlighting some exciting new services and interesting customer trends from the likes of financial services company Goldman Sachs and healthcare IT firm Cerner.

However, the first big announcement of the show actually came the day before, when AWS confirmed its first major move into the quantum computing space with the arrival of Amazon Braket.

The offering allows customers to design their own quantum algorithms from scratch or choose from a set of pre-built ones. This, it is claimed, will enable businesses to manage compute resources and establish low-latency connections to quantum hardware, giving them the opportunity to explore, evaluate and build expertise for the future.
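For readers wondering what "designing a quantum algorithm" looks like in practice, below is a minimal sketch using the Braket Python SDK. It builds a two-qubit Bell-state circuit and runs it on the local simulator that ships with the SDK rather than on managed quantum hardware; targeting a real device would mean pointing the code at a device ARN and an S3 location for results.

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT.
bell = Circuit().h(0).cnot(0, 1)

# Run it on the local simulator bundled with the SDK.
device = LocalSimulator()
task = device.run(bell, shots=1000)

# Roughly half the shots should come back as '00' and half as '11'.
print(task.result().measurement_counts)
```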

Gravitating towards Graviton at AWS Re:Invent

Back to the Andy Jassy keynote, and the first announcement of the day saw AWS build on last year's AWS Graviton 1 with a new set of M6g, R6g and C6g instances for EC2, powered by Arm-based AWS Graviton 2 processors. The chips promise to deliver the same level of performance at much lower cost for tasks such as handling user requests in applications, analysing user data, or monitoring performance.

Most significantly, the AWS Graviton 2 marks a major leap in performance and capabilities over AWS Graviton 1, providing up to 40% better price-performance.
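Trying the new instance types is no different from launching any other EC2 instance. Here is a minimal boto3 sketch; the AMI ID is a placeholder and would need to be an arm64 image (for example Amazon Linux 2 for arm64), and the region and instance size are illustrative.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch a single Graviton 2 (Arm-based) general-purpose instance.
# The AMI must be built for arm64; the ID below is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical arm64 AMI
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```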

The show also saw AWS continue to break out of the centralised service delivery approach and target improved application latency. A great example of this was the emergence of AWS Local Zones, which extend regions into more geographic areas for lower latency, and AWS Wavelength, which enables developers to build applications that deliver single-digit millisecond latencies to mobile devices and end-users.

Even more machine learning

Machine Learning (ML) continues to be a key area of focus for AWS, with a slew of announcements on the topic during the keynote, including a new ML web-based IDE, a model quality monitor, an experiments framework, and a deep learning library for Java. It is clear that ML is penetrating the higher levels of the application stack and getting closer to the business, with services ranging from CodeGuru for code review automation to Amazon Fraud Detector, which uses ML for fraud detection.

Through Amazon Fraud Detector, customers can create a fraud detection model that leverages ML without needing any technical understanding of ML. More generally, with a number of additional announcements of services that use ML under the hood, AWS is clearly moving in the direction of empowering its wider services base with ML, no longer keeping the ML portfolio confined to technology-centric use cases. This will certainly be welcomed by businesses which, despite having access to ML tools, have struggled to make sense of many of them in their own business contexts.

In a room full of developers it was no surprise that Amazon CodeGuru received such applause. Amazon CodeGuru helps businesses proactively improve code quality and application performance with intelligent, ML-driven recommendations. CodeGuru helps customers identify and fix code issues such as resource leaks, potential concurrency race conditions and wasted CPU cycles.

An expanding networking portfolio

A major new addition to the cloud giant's networking portfolio came in the form of inter-region peering support for AWS Transit Gateway, a feature that will benefit organisations with hybrid, multi-region, multi-VPC architectures.

The ability to peer Transit Gateways between different AWS Regions enables customers to build global networks spanning multiple geographic and jurisdictional areas. Traffic flowing over inter-region Transit Gateway peering always stays on the AWS global network and never traverses the public internet, so it remains secure and benefits from predictable network performance. Inter-region traffic is also encrypted, with no single point of failure.
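As a rough illustration of how the peering is established, here is a minimal boto3 sketch: one side creates the peering attachment and the peer region accepts it. The gateway IDs, account number and regions are hypothetical placeholders, and the acceptance step assumes the same account owns both transit gateways.

```python
import boto3

# Requester side: create the peering attachment from eu-west-1 to us-east-1.
ec2 = boto3.client("ec2", region_name="eu-west-1")

attachment = ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaa1111bbb22222c",      # hypothetical IDs
    PeerTransitGatewayId="tgw-0ddd3333eee44444f",
    PeerAccountId="123456789012",
    PeerRegion="us-east-1",
)

attachment_id = attachment["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# Accepter side: the peer region (here, the same account) accepts the attachment.
peer_ec2 = boto3.client("ec2", region_name="us-east-1")
peer_ec2.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment_id
)
```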

The AWS Transit Gateway Network Manager simplifies the monitoring of global networks, providing a single view of the entire organisational network. The service is already integrated with products from AWS partners such as Cisco, Aruba, Silver Peak, and Aviatrix, which have provisioned their software-defined wide area network (SD-WAN) devices to connect with Transit Gateway Network Manager in only a few clicks.

Such integration allows AWS customers to visualise their global network by means of a topology diagram or a geographical map.

Identity and Access Management

AWS is always looking for improvements, as well as new services and features, that could help its customers further improve their security posture in the cloud, and in line with that came the arrival of AWS IAM Access Analyzer.

This allows security teams to analyse resource policies and ensure only intended individuals can access resources. AWS IAM Access Analyzer makes it simpler to see which resources can be reached from outside the account, and how, across the entire cloud environment.
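To give a flavour of how this surfaces in practice, the sketch below uses boto3 to create an account-level analyzer and list its findings, each of which represents a resource that can be accessed from outside the account. The analyzer name and region are illustrative.

```python
import boto3

analyzer_client = boto3.client("accessanalyzer", region_name="eu-west-1")

# Create an account-level analyzer (the name is arbitrary / hypothetical).
analyzer = analyzer_client.create_analyzer(
    analyzerName="account-analyzer",
    type="ACCOUNT",
)

# List findings, i.e. resources that are shared outside the account.
findings = analyzer_client.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding.get("resource"), finding["status"])
```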

An IT pro’s summary

The HeleCloud team and I had a fantastic time at Re:Invent. The vast array of sessions on offer, coupled with the workshops available, made it a great opportunity to learn and get hands-on with AWS services and the AWS community. We'll be keeping a close eye on how these services are adopted and implemented by businesses over the next 12 months.

December 10, 2019  11:29 AM

AWS Re:Invent 2019: The kids’ take on the public cloud’s biggest tech show

Caroline Donnelly

Official estimates suggest 65,000 people made the trek out to Las Vegas this year for the 2019 Amazon Web Services (AWS) Re:Invent user and developer conference, and among them were five tech-savvy teenagers from Hertfordshire-based boarding school, Bishop’s Stortford College.

The quintet were gifted the chance to attend the show as a reward for winning the public cloud giant’s skills-focused Get IT competition with a speech-to-text app geared towards improving the classroom learning experience for hearing-impaired children.

The app – dubbed Connect Hearo – is connected to a microphone held by or attached to a teacher, whose utterances are picked up and converted to text by the app in real-time. The text is then published to a mobile device so that a child with hearing problems can read along and keep up with what is being taught.

Connect Hearo was pitted against 130 submissions from 36 schools, in response to a competition brief set by the AWS Get IT team that called on entrants to develop an app that could help solve a real-life problem within their school or local community.

AWS Re:Invent – getting kids into IT

The over-arching aim of the Get IT Programme is to inspire young people aged 12 to 13 to consider pursuing a career in technology and provide them with insights into the diverse range of roles that exist within the sector.

And the children's trip to Re:Invent plays an important part in this, as Ibby, Millie, Amy, Charlie and Zak were invited to attend sessions, participate in the conference's student track and meet IT leaders who are digitally transforming their organisations with the help of AWS.

As part of Computer Weekly’s AWS Re:Invent coverage, the children were invited to share their musings about the show on the Ahead In the Clouds blog, and provide our readers with a fresh perspective (or a reminder) about what it’s like to attend your first ever tech conference.

Read on to find out what they made of Re:Invent 2019…  

Ibby, 13 years old, Connect Hearo user experience expert (photo end right)

The highlight of AWS Re:Invent was our walk around the expo hall after the Student Track finished. We got to look at all of the other stands, and – to me – the most interesting part was the Jam Lounge, a section of the hall showcasing the latest technology, including a virtual reality (VR) stand.

We got taught how virtual reality companies were incorporating learning into their interactive games. Basically, what they are doing is teaching children new skills or refining existing ones through VR games. This way they take in more without even realising it, we learned, and this is a process called gamification.

Millie, 14 years old, Connect Hearo software planner (photo second from the right)

I really enjoyed going to AWS re:Invent, and the highlight for me was the augmented reality (AR) demonstration in the Developers’ Lounge. The demonstration showed you how AR could be used to help quickly educate people about things, with one example being learning how to change a tyre on a truck.

Without the AR demonstration, you would have to read a large manual on how to do this job, but – with AR – you can fully understand how to change the tyre in less than 10 minutes. As well as being much quicker, the AR also helps you understand how it works in much more detail.

If you were just reading a manual it could be confusing to understand what the manual was trying to tell you, especially if there were no diagrams. With the AR, it showed you step-by-step what to do with animations on the AR truck so you could see it happening. This part of the expo was particularly interesting, because you could really imagine how learning experiences might be enhanced and improved in the future.

Amy, 13 years old, Connect Hearo user journey designer and author (photo middle)

My favourite part of Re:Invent was at the start of the second day when we were taken into a meeting room with students from a nearby high school. This was part of a programme called ‘AWS Educate’, which aims to get children learning about technology.

Here we learnt how to build a Chatbot based on the Amazon Lex service [Ed – An AWS service that lets users build text- and voice-based conversational interfaces into apps].

This consisted of us programming a conversation where the user ordered something of our choice (in my case shoes), which meant we had to come up with trigger words and responses where we could input variables that would determine what you would order later.

This led onto us coming up with a confirmation text where the answer is either yes or no, both of these leading onto another response. Overall, I found this to be really interesting and will hopefully be something I use in the future.

Charlie, 13 years old, Connect Hearo app statistician (photo end left)

The main aim of the trip was to visit the expo and Student Track, during which we had a talk from Teresa Carlson, the head of public sector for AWS, who introduced the CTO of NASA's Jet Propulsion Laboratory (JPL), Tom Soderstrom, whose organisation is a customer of the firm.

We looked at rovers and how they can be incorporated into new fields of work and not just space exploration, and – with this in mind – we brainstormed some new use cases for the technology.

Soderstrom explained to us that this was exactly what they were doing at the JPL and that they weren't just making rovers that went to Mars. He told us how they made lots of different types for different jobs and that they are looking at new designs for the rovers. This was very inspiring and got us thinking about all the different technologies they were looking at.

Zak, 13 years old, oversees microphone logistics and general problem solving for Connect Hearo (photo second from the left)

During our trip to AWS Re:Invent my highlight was meeting a Formula 1 (F1) engineer. His name is Rob Smedley, and he is an amazing engineer, who has worked for F1 teams such as McLaren and Ferrari. His job is basically to try and make F1 cars complete a lap around a track as quickly as possible.

We had an extremely interesting interview, with Rob talking about many topics to do with engineering, from how tech is being incorporated into the racing industry to how we need to encourage more women to get into engineering.

One thing that I was very interested in was what engines F1 use, and what their power to weight ratio is. All my questions were answered, and I found out that F1 cars use hybrid single turbo V6s and have at least 1000 horsepower and weigh only 750 kilograms. The answers I received were all extremely amazing and were explained in lots of detail.

The experience was definitely my highlight of the trip as I love everything with an engine and at least four wheels. It was great to meet someone with the same interests as me: tech and cars. It was truly a once in a lifetime experience that I will never forget.


November 19, 2019  1:23 PM

Taking the cloud to the edge of space

Caroline Donnelly

In this guest post, Chris Roberts, head of datacentre and cloud at Goonhilly Earth Station, wonders if enterprises are ready to tap into the business opportunities the explosion in data generated by Earth observation satellites is on course to bring.

During the summer, satellite images of fires burning in Brazil’s Amazon basin shocked the world. Using Copernicus Sentinel-3 data, as part of the Sentinel-3 World Fires Atlas, almost 4,000 fires were detected in August 2019, compared with just 1,110 fires at the same time the previous year, according to the European Space Agency.

But the satellite image data revealed a more nuanced story. Most of the fires were burning on agricultural land that was cleared of trees some time ago, but the most intense fires were burning where forest had recently been cleared.

The world is waking up to the insight and detail satellite data offers any organisation prepared to invest in it. Morgan Stanley estimates that the global space industry could generate revenue of more than $1 trillion by 2040, up from $350 billion, currently. Much of the increase comes from the booming satellite sector.

At the same time, the industry is miniaturising. The emergence of small satellites, including cubesats, with their shorter development cycles and smaller development teams, is consequently cutting launch costs.

Demand for data drives Earth observation satellite interest

Driving demand for investment in the satellite industry is a thirst for data. Companies know that, used in the right way, satellite data can help decipher global business, economic, social and environmental trends.

For example, in 2017 US retail giant JC Penney announced the closure of 130 stores after five years of struggling to attain full-year profitability.

But Orbital Insight, a satellite data company, knew something was afoot well before the announcement. It found the number of vehicles in JC Penney car parks had fallen by 5% year-on-year in the last quarter of 2016 and was down 10% year-on-year for Q1 of 2017.

The start-up tracks 250,000 parking lots for 96 retail chains across the US. It found JC Penney's parking lot figures track its stock price closely.

Research from UC Berkeley added to the evidence of the value of satellite data. It found that if, during the weeks before a retailer reported quarterly earnings, investors had bought shares when parking traffic increased abnormally and sold them when it declined, they would have earned a return 4.7% higher than the typical benchmark return.

They also discovered that stock prices did not adjust even as sophisticated investors used the satellite data to profit from trading shares. Instead, during the period before earnings reports, the information stayed within the closed loop of those who had paid for it.

According to The Atlantic, hedge-fund managers use machine-learning algorithms that incorporate car counts as well as other types of alternative data emanating from the geolocation capabilities of mobile phones to monitor consumer behaviour, consumer transactions and retail footfall.

This is just one of the many applications of satellite data. Companies use it to price competitors’ assets, economists use it to predict GDP, and political analysts use it to understand global conflict. But this is just the beginning.

Open satellite data is being combined with an increasing range of other data sources to provide insight to any business with an interest in understanding the present or predicting the future, sometimes in unexpected ways.

For example, Google Street View has been used by the Rochester Institute of Technology to find invasive plants, using images from its cars to do something completely unrelated to map building.

In the future, autonomous cars will constantly be uploading and downloading data, and talking to the cities they drive through. Delivery drones and robots will add to the vast pool of insight.

Within the home environment, sensors, cameras and smart speakers are becoming commonplace. Companies such as Audio Analytic in Cambridge, UK, can use speaker systems' audio data to recognise smoke alarms, breaking windows, babies crying, and dogs barking.

Counting the cloud cost of Earth observation satellite data

Working out how to manage the sheer volume of data being collected while minimising cloud storage costs is becoming a priority. Machine learning and AI will become part of an industrialised effort to sift through and analyse satellite data and combine it with other data sources to provide businesses with new valuable insight.

While the advent of cloud computing has given the illusion of infinite computing power available anywhere, the truth is a different story. According to infrastructure provider NEF, the price to transfer 30 TB per month is $2,500. The key is to move only the data you need.
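That figure is easy to sanity-check. The short calculation below assumes a typical public cloud data egress rate of roughly $0.08 per GB, which is an illustrative assumption rather than NEF's published pricing; at that rate, 30 TB a month lands in the same ballpark.

```python
# Back-of-the-envelope check on the cost of moving 30 TB out per month.
# The per-GB egress rate is an assumed, illustrative figure.
tb_per_month = 30
gb_per_tb = 1024
price_per_gb_usd = 0.08

monthly_cost = tb_per_month * gb_per_tb * price_per_gb_usd
print(f"Estimated egress bill: ${monthly_cost:,.0f} per month")  # ~$2,458
```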

This is one of the reasons that edge computing is on the rise. The concept is that, with an exponential increase in IoT data, organisations will analyse it with machine learning models close to the location where it is produced, rather than shifting volumes of raw data across their infrastructure.

Satellite data enters the ecosystem at Earth stations. Co-locating these Earth stations with high-performance computing is a natural extension, allowing data scientists to build machine-learning models to analyse data close to the point of delivery. This can help businesses and academia to be more responsive and exploit these burgeoning information resources more efficiently. Then they need only move their most valuable, insightful data elsewhere. This is the next frontier in edge computing: computing at the edge of space.


September 26, 2019  1:44 PM

Getting cloud under control: A how-to guide for CIOs

Caroline Donnelly

In this guest post, Dave Locke, chief technology advisor for Europe, Middle East and Africa, at IT service provider World Wide Technology, sets out why organisations need a clear, consistent and easy-to-follow strategy to keep their cloud plans in check.

In August 2006, Google's then-CEO Eric Schmidt used the term 'cloud computing' at an industry conference to describe the emerging model of running data services and architecture on virtual servers.

“I don’t think people have understood how big the opportunity really is,” Schmidt said, adding that “an awful lot of people are still trying to do stuff the old way.”

In the 13 years that have passed, fuelled by a staggering pace of innovation and fierce competition between hyperscale providers, the public cloud has reached near-ubiquity in business.

So much so that CIOs are under increased pressure to fully detach from the 'old way' of IT management and drive a cloud strategy better aligned to the business' ambitions for scale and flexibility.

The appeal of the public cloud for data management and service provision is incontrovertible. It offers businesses the ability to move applications and data from static on-premise environments towards more flexible architectures where they can draw on the near-limitless range of tools and services offered by the various public cloud providers.

In essence, they gain access to a range of choice that wasn’t achievable in their own datacentre. But cloud is not always the best solution.

With the relative ease of accessing these cloud services – combined with the hype and fervour that often surrounds cloud adoption – the risk arises that businesses can find themselves contracted to a greater public cloud presence than they’d anticipated.

Compounding this, different business units may bypass the IT department to source their own services, opening up a range of questions around governance, visibility and risk management, not to mention the high cost of running unnecessary cloud workloads.

Compelling reasons to move to cloud

Driven by competitive factors, businesses are understandably keen to migrate to the cloud. In the race to adopt new technology, however, it can be easy to lose sight of what this means in reality.

Finding best-fit service provision starts with the understanding that businesses should look to migrate data and applications to the public cloud only where it is necessary.

Cloud migration is not a goal in itself; businesses need to look to gain new capabilities that can only be provided by the cloud.

Businesses should therefore assess all the applications and services they require before mapping them to the differing strengths of the providers, along with the appropriate cost structures.

A common misconception when migrating to the public cloud is that it's cheaper by default. While in some instances this may be the case, having this mentality can be dangerous because, without proper management, costs can easily spiral, often leading to 'bill shock' which can jeopardise the success of the entire cloud strategy.

The approach should be aligned to company goals and the best return on investment, rather than on cost-cutting.

Instilling governance

For many organisations, the use of cloud computing has involved a lengthy period of trial and experimentation. Business users, both on an individual and unit basis, have tested various tools and applications to serve their needs. The issue many businesses have found is that coordination of such dispersed efforts is hard to control, leading to inconsistent management protocols.

Indeed, it’s often the case that more senior or technically-savvy personnel are only brought into the discussion when issues emerge, by which point the damage is already done.

The need for clear policies and governance on cloud usage has therefore emerged as an important consideration. This extends from auditing what data needs to be transferred to the cloud to gaining clear visibility on where data is and how it’s accessed.

For the CIO, having a unified data strategy with complete visibility is key to executing the business’ cloud strategy. This requires building a record of important information, recording views and managing access requests, backed up with stringent policies and the use of third party applications to analyse, model and track their data.

Take back control of cloud

Fundamental problems in digital transformation occur when businesses attempt to approach the future using old thinking. Migration to the cloud isn’t about transferring old processes and implementing incremental change.

It presents an opportunity to create new ways of working, not just bolting old methods onto new technology. The businesses that examine their cloud strategy from the ground up will find greater visibility and control.

A well-designed and practical cloud strategy provides businesses with the opportunity to manage their applications and business services on the most suitable platforms to help deliver best-in-class solutions.

This doesn't mean racing to decommission their legacy IT infrastructure, but having the level of control required to make intelligent decisions about which components are best suited to each platform, and piecing them together into one holistic solution. CIOs need to be conscious that all elements of a hybrid solution come with cost implications, and so should weigh the costs and benefits of each part of their technology deployment carefully.

Cloud adoption presents the chance to embrace true change, but it is not a panacea to every business issue. Only with a thorough plan can you determine if transferring to the cloud makes good business sense, and ensure that you gain control of your cloud estate.


August 13, 2019  8:45 AM

Multi-cloud: Reaping the benefits

Caroline Donnelly

In this guest post, Will Grannis, founder and director of the Google Office of the CTO, sets out how adopting a multi-cloud strategy can help enterprises navigate IT challenges around value, risk and cost.

Innovation in the public cloud continues at a staggeringly fast pace. Cloud service providers (CSPs) are continually bringing new services to market, offering a greater depth of choice to enterprise customers. The conventional approach of picking a single cloud vendor and 'locking in' to its infrastructure is far less effective in the context of such rapid change.

Multi-cloud, however, has emerged as a means of capitalising on the freedom of choice the public cloud offers, not to mention the differentiated technology each supplier in this space can provide. While the meaning of multi-cloud appears self-evident, in reality it's more about shifting the role of the organisation's IT function.

IT needs to shift its focus towards consuming ‘as-a-service’ and having the tools in place to architect across different cloud platforms, and then re-architect over time as business requirements change. In essence, a multi-cloud strategy is a service-first strategy.

Multi-cloud means service-first

For most businesses, this as-a-service approach has become the norm. According to Gartner, by 2021, more than 75% of midsize and large organisations will have adopted a multi-cloud and/or hybrid IT strategy.

The advantages of having a multi-cloud strategy are closely aligned with businesses’ broader IT goals of revenue acceleration, improved agility and time to market, as well as cost reduction. Reduced supplier lock-in, increased scope for cost optimisation, greater resilience and more geographic options all serve to provide a stronger basis for operating applications and workloads at scale.

A greater choice of geographical and virtual locations is important for enterprises that have strict requirements about where data can be stored and processed. The EU's General Data Protection Regulation (GDPR), for example, includes the 'right to data portability', specifying that personal (e.g. employee or customer) data must be easily transferable from location to location.

In fact, TechUK’s Cloud2020 Vision Report recommends that organisations ingrain data portability and interoperability into their systems. This means that as more options come online, they are then able to re-evaluate their decisions as necessary.

The aim of a multi-cloud strategy should be to build and maintain capabilities to assess an IT workload and decide where to place it, balancing out a variety of factors. Considerations for placement include how workload location can help IT maximise the value delivered to the business, risks associated with a particular deployment choice, costs of each placement and how well each choice fits with its surrounding architecture.

Multi-cloud as the route to innovation

Adopting multi-cloud means the enterprise is able to embrace the new technologies offered by each CSP it uses. Not only does this create cross-cloud resilience, it also gives the ability to run workloads in specific locations that are not offered by every CSP.

Furthermore, it ensures organisations can take advantage of the latest offerings around emerging technologies like machine learning and the Internet of Things.

This level of flexibility must also be supported by interoperability built into the fabric of the multi-cloud’s design. Using a containerised approach, based on open source technology such as Kubernetes, enterprises can ensure that they are able to fluidly move data between cloud platforms and back into their private environments, in order to make compliance and best-fit service provisioning effortless.

Having this level of interoperability brings significant efficiency benefits. In a single-cloud architecture, IT teams have to write code specifically for whichever cloud platform they are using.

As a result, organisations are forced to have several teams of developers with siloed skill sets that cannot be applied to more than one technology stack.

Write once, run anywhere with containers

With containers, they have the cross-platform compatibility to write once and run anywhere. This is an approach supported by Google Cloud's Anthos multi-cloud management platform.

Anthos is an open source-based technology, incorporating Kubernetes, Istio and Knative, that lets teams build and manage applications in their existing on-premises environments or in the public cloud of their choice.
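To illustrate the 'write once, run anywhere' point in a platform-neutral way, the sketch below uses the official Kubernetes Python client to declare a small deployment. The same object can be applied to any conformant cluster, whether on-premises or in a managed cloud service; the image, names and replica count are illustrative, and this is generic Kubernetes rather than anything Anthos-specific.

```python
from kubernetes import client, config

# Load the kubeconfig for whichever cluster is current -- on-premises,
# GKE, or another provider's managed Kubernetes; the object is the same.
config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.17",  # any OCI container image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Apply the deployment to the currently selected cluster.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```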

As the use of multiple cloud services within a single enterprise IT environment becomes the reality, enabling public and private clouds to work in harmony needs to be an organisational priority.

Finding a right-fit environment for each workload means IT can accelerate application development with performance, cost and resilience at its foundation, helping unlock the true range of possibilities a multi-cloud world has to offer.


August 7, 2019  3:00 PM

Digging into the cloud security arguments of the Capital One data breach

Caroline Donnelly

In this guest post, Dob Todorov, CEO and chief cloud officer, HeleCloud, sets out why it is wrong to declare cloud not fit for business use in the wake of the Capital One data breach.

In two separate statements last week, Capital One announced a data breach involving a large amount of customer data, including credit card information and personal information, stored on the Amazon Web Services (AWS) Simple Storage Service (S3).

The data breach, understandably, sparked a flurry of media interest, resulting in a broad set of statements and interpretations being communicated by many different individuals. Yet one claim that received an unfair amount of momentum around the Capital One data breach was that it highlights how cloud is not secure enough to support businesses.

Here it is important to separate fact from opinion. The truth is, security incidents (including data and policy breaches) happen every day, both in the cloud and conventional datacentre environments. While some get detected and reported on, like the recent Capital One or British Airways breaches, most incidents, unfortunately, go unnoticed and unaccounted for.

The cloud provides businesses with an unprecedented level of security through the visibility, auditability and access control it offers over all infrastructure components and cloud-native applications.

Cloud is secure and fit for business

With more than fifty industry and regulatory certifications and accreditations, the AWS public cloud platform is worth calling out for an unrivalled level of security standards, far beyond that offered by any on-premise solution. So, what went wrong for Capital One?

Such a powerful and natively secure platform still requires organisations to architect solutions on it, configure it and manage it securely.  While the security of the cloud is the responsibility of platform providers like AWS, security in the cloud is the responsibility of the users.

For example, based on the available information, it appears that while Capital One had encrypted its data within an AWS S3 bucket using the default settings to protect confidentiality, it made the false assumption that this alone would protect the personal information against any type of unauthorised access.

What's more, it appears that specific AWS S3 access control policies were not configured properly, allowing access either anonymously from the internet or via an application with wider-than-required access permissions. Having the right access policies in place is crucial to protecting resources on AWS S3.
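As a simple illustration of the kind of guard-rail that helps here, the boto3 sketch below turns on S3 Block Public Access for a bucket, so that no ACL or bucket policy can later expose its contents publicly. The bucket name is hypothetical, and this is one control among many rather than a fix for the specific access path used in the breach.

```python
import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket, regardless of how
# individual object ACLs or bucket policies are later (mis)configured.
s3.put_public_access_block(
    Bucket="example-customer-data",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```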

Diary of the Capital One data breach

While the data breach was uncovered by Capital One in July 2019, the data was first accessed by an unauthorised source four months earlier, in March 2019. Given the visibility that AWS provides, including real-time monitoring, Capital One should have detected and contained the incident in real-time. Such capabilities do not exist outside the AWS platform, so having the capabilities and not using them is unacceptable.

In a statement released by the firm, Capital One recognised the limitations of its specific implementation and confirmed the mistakes and omissions made were down to its own configuration of the AWS platform.

Such mistakes would have led to a similar outcome even if their systems were in conventional datacentres. The firm stated: “This type of vulnerability is not specific to the cloud. The elements of infrastructure involved are common to both cloud and on-premises datacentre environments.”

Rooting out the weakest link

When it comes to security, businesses are only as strong as the weakest link in the system. The root cause can vary from the configuration of technologies, to processes that support insecure practices, to something as simple as human error due to a lack of knowledge, skills or experience.

Of course, there are also cases where humans have deliberately caused incidents for financial gain. However, whether in the cloud or on-premises, all aspects must be taken into consideration and no area left ignored or underinvested in when architecting for system security.

Five steps to a secure cloud

To ensure that this does not happen, there are five security and compliance principles that experts insist every business follows. Firstly, all security and compliance efforts require a holistic approach – people, processes and technologies.

Each of these three areas plays a significant part in the processing and protection of data, thus all three must be considered when evaluating your level of security.

This brings us on to the second principle: minimise human impact. Whether deliberate or accidental, most security breaches within businesses are caused by humans. Businesses should invest in more automation to improve the security of systems.

Thirdly, detection is extremely important. Not knowing that an incident has occurred does not eliminate its impact on the business or customers. Sadly, many remain unaware of the incidents that have taken place within their businesses.

The fourth principle seems simple but is often overlooked: security is an ongoing requirement, one that requires maintenance for the whole life span of a system. Lastly, encryption is not always the answer to data protection. Encryption helps to solve a very specific problem: confidentiality. Simply encrypting data, regardless of the algorithms and keys used, does not necessarily make the system more secure.

Cloud is both a very powerful and secure IT delivery platform. To ensure that they are benefiting from these capabilities, businesses must configure their chosen cloud platform to their needs.

Despite cloud skills still not being at the level they need to be in the UK, many specialist partners exist to ensure businesses have the knowledge and experience needed to become more secure in the cloud than anywhere else.

 


August 6, 2019  11:09 AM

Hype vs. reality: Why some organisations are opting to de-cloud

Caroline Donnelly

In this guest post, Justin Day, CEO of hybrid cloud connectivity platform provider Cloud Gateway, shares his thoughts on why some enterprises are choosing to pull back from off-premise life and de-cloud their IT.

The cloud promised to transform businesses, bringing speed, security and scale instantaneously. However, the hype around cloud and its benefits has led businesses to become blind to their requirements, moulding their needs to fit the cloud rather than the cloud to fit their needs.

In some cases this has led to organisations migrating a whole business structure to the cloud, which has caused many to suffer big losses.

Thankfully, company leaders are starting to open their eyes to a hybrid approach, understanding where the cloud can best support them and which elements are better suited to staying on-premise.

As such, many enterprises are now moving away from the cloud – also known as de-clouding (or cloud repatriation as some have coined it) – to create a cloud strategy that fits their business objectives and goals.

Costing up the cloud

Service providers shouted from the rooftops about cloud's cost savings and benefits compared to owning and operating datacentres. And, while it's true that (typically) on-premise datacentres are expensive, those costs are at least clear, in contrast to the hidden costs of the cloud.

Cloud providers hide high costs with desirably low on-boarding costs and capital expenditure (CAPEX) to cover up the ongoing operating expenditure (OPEX). This is where enterprises get caught out, as it’s not migrating the whole datacentre that will reduce business costs, but using what you need in both the datacentre and in the cloud.

For data in the cloud, it’s imperative that all enterprises are implementing a sturdy cloud governance programme.

Businesses are becoming more savvy in understanding what they need, and Dropbox is a perfect example of this: the company cut $74.6m from its operational expenses in the two years after it 'de-clouded' from AWS into its own datacentres.

Multi-cloud for extra security

Having all business services in the cloud also opens a range of security issues as not every cloud service will provide the right security to protect every aspect of the business. With such varying security requirements across a business, one cloud provider won’t be sufficient, and a multi-cloud approach is best to ensure security for sensitive data.

In contrast, a datacentre provides clear security processes. For example, it is easy to see where the door opens and where the door closes. Currently, with security risks rife, companies are rightly increasingly conscious about their security measures.

Going on-premise, where a company can retain control of their data, provides the satisfaction that both customer and company data is protected.

The term agility has lost its true meaning in recent years, with many cloud providers promising to provide an agile network they cannot deliver. Many lock enterprises into their network and don’t allow for any flexibility, which can be costly and inefficient.

Having a hybrid approach of both on-premise and in the cloud will give companies the ability and confidence to react quickly to customer needs.

De-cloud to de-risk

Furthermore, it reduces and manages risk by avoiding complete downtime if one operator goes down. It's no longer acceptable to leave a customer waiting three days before coming back with a solution or providing a new service for them, so housing services both on-premise and in the cloud is hugely beneficial for keeping customers happy.

Migrating to the cloud is often perceived as easy, cheap and efficient. However, with the internet coming under more pressure from ever-increasing user traffic, there's only so much pressure it can withstand. And the complexities caused by internet failure can be very damaging for businesses if not mitigated properly.

The future is absolutely a hybrid model and companies need to take advantage of both cloud and on-premise offerings to make sure they can best serve their customers as well as protect their business in both monetary value and security.


July 24, 2019  12:45 PM

Why the government’s cloud-first policy review should be applauded

Caroline Donnelly

In this guest post, HPE UK MD Marc Waters sets out why the government’s decision to place its cloud-first policy under review makes sense, as the hybrid IT consumption model continues to take hold in the enterprise.

It was good to read that the Crown Commercial Service (CCS) and Government Digital Service (GDS) are reviewing the public cloud-first policy that has guided public sector technology procurement since its introduction in 2013.

The policy has been a success in advancing cloud adoption and the provision of digital public services. However, it is now widely accepted that a 'one size fits all' approach is not only too simplistic, but also too restrictive and expensive.

It is also worth noting the move mirrors a decision by the US government at the start of this year to move its Federal Cloud Computing strategy from ‘Cloud First’ to ‘Cloud Smart’.

Ultimately it is about using the right tools for the right workload and having the ability to flex your technology mix. For the public sector, achieving the right cloud mix will help deliver better and more efficient citizen services.

The Hewlett Packard Enterprise (HPE) belief that the future of enterprise IT is hybrid is now widely accepted. Organisations are looking to blend their use of public cloud, with the security, control and cost benefit of a private cloud, and the same holds true for the public sector.

Solving data challenges in the cloud-first era

Another factor to consider as the UK government looks toward the future is the importance, impact and incredible growth of data. Future planning requires smart thinking about how data is captured, stored and analysed.

As we all become increasingly mobile and connected, valuable data is being created at the edge. The edge is outside the datacentre and is where digital interaction happens.

Authorities need to be able to manage ‘hot’ data at the edge and use it to make instant, automated decisions, such as improving congestion through smart motorways or using facial-recognition to identify high-risk individuals.

This is in addition to managing ‘cold’ data which is used to analyse patterns and predict future trends, pertaining – for example – to the provision of healthcare and social housing services.

Different data sets benefit from different, connected, data solutions. In a hybrid cloud environment, for example, transaction processing can be managed in the public cloud with data stored privately ensuring control and avoiding the charges to upload data into a public cloud.

Retaining government data in a private cloud removes the lock-in of the charges levied by public cloud providers to take back control of that data, which gives more flexibility and options.

Combining a public and private cloud strategy enables customers to demand cloud value for money, not just now, but ongoing. So as a technologist and a taxpayer I see the review by CCS as a hugely positive step forward for the UK government.

Speaking at the Times CEO Summit in London recently Baroness Martha Lane-Fox, who helped to establish the Government Digital Service during the coalition government, noted the digitisation of the State was a ‘job half done’. If that’s the case, at HPE we stand ready to work with the public sector to get the other half done.


June 12, 2019  8:30 AM

Cloud migration: A step-by-step guide

Caroline Donnelly

In this guest post, Paul Mercina, director of product management at datacentre hardware and maintenance provider Park Place Technologies, offers a step-by-step guide to how, what and when to move to the cloud.

 The benefits of using cloud are multiple, but the process of migrating a company’s IT systems off-premise (while simultaneously ensuring ‘business as usual’ for staff, customers and the supply chain) is not without its challenges.

While investing in the cloud will result in less on-site hardware and fewer applications for IT managers to manage, this may not necessarily translate into less work to do.

Cloud computing depends on a significant amount of oversight to ensure suppliers are meeting service level agreements, spending keeps to budget, and cloud sprawl is kept to a minimum.

This vital work requires a different skill set, so you will need to consider upskilling and retraining staff to manage their evolving roles.

Developing a robust cloud migration strategy alongside this work will be a must, and there are a few things to bear in mind when seeking to create one.

Cloud migration: Preparation is everything

As the global cloud market matures, CIOs are increasingly presenting compelling business cases for cloud adoption. Moving all your IT systems to the cloud instantly may have strong appeal, but in reality, this is unrealistic. Not everything can or should be moved, and you will also need to consider the order of migration and impact on business and staff.

Considering the unique needs of your organisation will be critical to developing a plan that unlocks the benefits of the cloud without compromising security, daily business activities, existing legacy systems or wasting budget.

Many applications and services are still not optimised for virtual environments, let alone the cloud. Regardless of how ambitious a company’s cloud strategy is, it’s likely you will have a significant datacentre footprint remaining to account for important data and applications.

Supporting these systems can be an ongoing challenge, particularly as organisations place more importance, budget and resource into the cloud.

Cloud migration: The importance of interim planning

Mapping a cloud migration strategy against long-, mid- and short-term goals can be helpful. The long-term plan may be to move 80% of your applications and data storage to the cloud; in the short term, however, you will need to consider how you will maintain the accessibility and security of existing data, hardware and applications while the migration takes place.

Third party suppliers can help maintain legacy systems and hardware during the transition to ease disruption and ensure business continuity.

In line with this, cloud migration will inevitably involve the retirement of some hardware. From a security perspective, it's imperative to ensure any data stored on that hardware is securely erased, to avoid exposing your organisation to the risk of data breaches.

Many organisations underestimate hard drive-related security risks or assume incorrectly that routine software management methods provide adequate protection.

Cloud migration: Meeting in the middle

Moving to the cloud often creates integration challenges, leaving IT managers to find ways to successfully marry up on-premise hardware with cloud-hosted systems.

In many cases, this involves making sure the network can handle smooth data transmissions between various information sources. However, getting cloud and non-cloud systems to work with one another can be incredibly difficult, involving complex projects that are not only difficult to manage, but also complicated by having fewer resources for the internal facility.

Cloud migration: Budget management challenges

With more budget being transferred off site for cloud systems and other IT outsourcing services, many IT managers are left with less to spend on their on-site IT infrastructure.

The financial pressure mounts as more corporate datacentres need to take on cloud attributes to keep up with broad technology strategies. Finding ways to improve IT cost efficiency is vital to addressing internal datacentre maintenance challenges.

Cloud migration: Post-project cost management

Following a cloud migration, retained legacy IT systems age, so it is worth investigating if enlisting the help of a third-party maintenance provider might be of use.

A third-party maintenance provider can give IT managers the services they need, in many cases, for almost half the cost. The end result is that IT teams can have more resources to spend on the internal datacentre and are better prepared to support the systems that are still running in the on-site environment.

While hardware maintenance plans may not solve every problem, they provide a consistent source of fiscal and operational relief that makes it easier for IT teams to manage their data storage issues as they arise.


May 28, 2019  10:58 AM

The people vs. Amazon: Weighing the risks of its stance on facial recognition and climate change

Caroline Donnelly

Amazon’s annual shareholder meeting appears to have highlighted a disconnect between what its staff and senior management think its stance on facial recognition tech and climate change should be.

When in need of a steer on what the business priorities are for any publicly-listed technology supplier, the annual shareholders’ meeting is usually a good place to start.

The topics up for discussion usually provide bystanders with some insight into what issues are top of mind for the board of directors, and the people they represent: the shareholders.

From that point of view, the events that went down at Amazon’s shareholder meeting on Wednesday 22 May 2019 paint a (potentially) eye-opening picture of the retail giant’s future direction of travel.

Of the 11 proposals up for discussion at the event, there were a couple of hot button issues. Both were raised by shareholders, with the backing of the company’s staff.

The first relates to whether Amazon should be selling its facial recognition software, Rekognition, to government agencies. This is on the back of long-held concerns that technologies of this kind could be put to use in harmful ways that infringe on people’s privacy rights and civil liberties.

Shareholders, with the support of 450 Amazon staff, asked meeting participants to vote on whether sales of the software should be temporarily halted; that is, until independent verification can be obtained confirming it does not contribute to “actual or potential” violations of people's privacy and civil rights.

The second, equally contentious, discussion point relates to climate change. Or, more specifically, what Amazon is doing to prepare for and prevent it, and to drive down its own consumption of fossil fuels. Shareholders (along with 7,600 Amazon employees) asked the company in this proposal to urgently create a public report on the aforementioned topics.

Just say no

And in response Amazon’s shareholders and board of directors voted no. They voted no to a halt on sales of Rekognition. And no to providing shareholders, staff and the wider world with an enhanced view of what it’s doing to tackle climate change.

The company is not expected to publish a complete breakdown of the voting percentages on these issues for a while, so it is impossible to say how close to the 50% threshold these proposals were to winning approval at the meeting.

On the whole, though, the results are not exactly surprising. In the pre-meeting proxy statement, Amazon’s board of directors said it would advise shareholders to reject all of the proposals put up for discussion, not just those pertaining to facial recognition sales and climate change.

Now, imagine for a minute you are a shareholder or you work at Amazon, and you’ve made an effort to make your displeasure about how the company is operating known. And, particularly where the facial recognition proposal is concerned, you get effectively told you are worrying about nothing.

In the proxy statement, where Amazon gives an account as to why it is rejecting the proposal, it says the company has never received a single report about Rekognition being used in a harmful manner, while acknowledging there is potential for any technology – not just facial recognition – to be misused.

Therefore: “We do not believe that the potential for customers to misuse results generated by Amazon Rekognition should prevent us from making that technology available to our customers.”

Commercially motivated votes?

A cynically-minded person might wonder if its stance on this matter is commercially motivated, given two of its biggest competitors, Microsoft and Google, are taking a more measured approach to how, and to whom, they market their facial recognition technology.

Microsoft, for example,  turned down a request to sell its technology to a Californian law enforcement agency because it had been predominantly trained on images of white males.  Therefore, there is a risk it could lead to women and minority groups being disproportionately targeted when used to carry out facial scans.

Google, meanwhile, publicly declared back in December 2018 that it would not be selling its facial recognition APIs, over concerns about how the technology could be abused.

Incidentally, Amazon shareholders' concerns about the implications of selling Rekognition to government agencies were recently echoed in an open letter signed by various Silicon Valley AI experts. They include representatives from Microsoft, Google and Facebook, who also called on the firm to halt sales of the technology to that sector.

Against that backdrop, it could be surmised that Amazon is trying to make the most of the market opportunity for its technology until the regulatory landscape catches up and slows things down. Or, perhaps, it is just supremely confident in the technology's abilities and lack of bias? Who can say?

Climate change: DENIED

Meanwhile, the company’s very public rejection of its shareholder and staff-backed climate change proposal seems, at least to this outsider, a potentially dicey move.

The company's defence on this front is that it appreciates the very real threat climate change poses to its operations and the wider world, and how its own operations may contribute towards that. To this end, it has already implemented a number of measures to support its transformation into a carbon-neutral, renewably-powered and more environmentally-friendly company.

But, where the general public is concerned, how much awareness is there about that? Amazon has a website where it publishes details of its progress towards becoming a more sustainable entity, sure, but it’s also very publicly just turned down an opportunity to be even more transparent on that.

Climate change is an issue of global importance and one people want to see Amazon stand up and take a lead on, and even some of its own staff don't think it's doing a good enough job right now. For proof of that, one only has to cast a glance at some of the blog posts the Amazon Employees for Climate Justice group has shared on the blogging platform, Medium, of late.

Amazon vs. Google: A tale of two approaches

From a competitive standpoint, its rejection of these proposals could end up being something of an Achilles heel for Amazon when it comes to retaining its lead in the public cloud market in the years to come.

When you speak to Google customers about why they decided to go with them over some of the other runners and riders in the public cloud, the search giant’s stance on sustainability regularly crops up as a reason. Not in every conversation, admittedly, but certainly in enough of them for it to be declared a trend.

From an energy efficiency and environmental standpoint, it has been operating as a carbon neutral company for more than a decade, and claims to be the world’s largest purchaser of renewable energy.

It is also worth noting that Google responded to a staff revolt of its own last year by exiting the race for the highly controversial $10bn cloud contract the US Department of Defense is currently entertaining bids for.

Amazon and Microsoft are the last ones standing in the battle for that deal, after Google dropped out on the grounds it couldn’t square its involvement in the contract with its own corporate stance on ethical AI use.

There is a reference made within the proxy statement that, given the level of staff indignation over some of these issues, the company could run into staff and talent retention problems at a later date.

And that is a genuine threat. For many individuals, working for a company whose beliefs align with their own is very important, and when you are working in a fast-growing, competitive industry where skills shortages are apparent, finding somewhere new to work where that is possible isn't such an impossible dream.

And with more than 8,000 of its employees coming together to campaign on both these issues, that could potentially pave the way for a sizeable brain drain in the future if they don’t see the change in behaviour they want from Amazon in the months and years to come.

