In this guest post Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.
One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.
Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.
Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.”
Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.
Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.
We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.
We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that’s where we all are.
On the backend, the cost of computing has been dropping exponentially, and thanks to the cloud anyone now has access to massive computing and storage resources on a pay-as-you-go basis. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.
Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.
As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. In this state, technological change is an opportunity.
How cloud challenges the incumbents to think different
Snapchat is one of the best examples of how this can work. Founded in 2011, the team used Google Cloud Platform for its infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million active users per day, who share more than 8,000 photos every second.
The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.
For these companies, technological change is a threat. Legacy systems don’t incorporate the latest performance improvements and cost savings. They aren’t benefitting from exponential growth, and they risk falling behind their competitors who are.
This can be daunting, since it’s not realistic for most companies to make big changes overnight.
If you run a business with less than agile legacy systems, here’s one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.
The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.
There are no “one size fits all” solutions, but with an open mind, smart leaders can discover what works best for their team.
It’s important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.
Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and devote more to developing apps and services that will propel the business forward.
Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it’s moving all of the company’s in-house IT assets to the cloud may have surprised some.
Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?
Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.
Reserving the right
The search giant has been refreshingly open in the past with its misgivings about entrusting the company’s corporate data to the cloud (other people’s clouds, that is) because of security concerns.
Instead, it prefers employees to use its online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.
This was a view the company held as recently as 2013, but now it’s worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.
So much so, the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.
What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it’s good enough for them too?
Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.
Walking the walk
What the Google news should do is get enterprises thinking a bit more about how bought-in to the idea the other companies whose cloud services they rely on really are.
While they publicly talk up the benefits of moving to the cloud, and why it’s a journey all their customers should be embarking on, have they (or are they in the throes of) going on a similar journey themselves?
If not, why not, and why should they expect their customers to do so? If they are (or have), then talk about it. Not only will doing so add some much-needed credibility to their marketing babble, it will also show customers they really do believe in cloud, and aren't just talking it up because they've got a product to sell.
Myths and misunderstandings around the use and benefits of cloud computing are slowing down IT project implementations, impeding innovation, inducing fear and distracting enterprises from yielding business efficiency and innovation, analyst firm Gartner has warned.
It has identified the top ten common misunderstandings around cloud:
Myth 1: Cloud is always about the money
Assuming that the cloud always saves money can lead to career-limiting promises. Saving money may end up being one of the benefits, but it should not be taken for granted. It doesn't help that the big players of the cloud world – AWS, Google and Microsoft – are tripping over each other to cut prices. Cost savings should be seen as a nice-to-have benefit, while agility and scalability should be the top reasons for adopting cloud services.
Myth 2: You have to do cloud to be good
According to Gartner, this is the result of rampant “cloud washing.” Some cloud washing is based on a mistaken mantra (fed by hype) that something cannot be “good” unless it is cloud, a Gartner analyst said.
Besides, enterprises are labelling many of their IT projects as cloud simply to tick a box and secure funding from stakeholders. People are falling into the trap of believing that if something is good, it has to be cloud.
There are many use cases where cloud may not be a great fit – for instance, if your business does not experience many peaks and lulls in demand, cloud may not be right for you. Also, for enterprises in heavily regulated sectors, or those operating under strict data protection regulations, a highly agile datacentre fully within IT's control may be the best bet.
Myth 3: Cloud should be used for everything
Related to the previous myth, this refers to the belief that the characteristics of the cloud are applicable to everything – even legacy applications or data-intensive workloads.
Unless there are clear cost savings, a legacy application that rarely changes is not a good candidate for a move to the cloud.
Myth 4: “The CEO said so” is a cloud strategy
Many companies don’t have a cloud strategy and are adopting cloud just because their CEO wants it. A cloud strategy begins by identifying business goals and mapping the potential benefits of the cloud to them, while mitigating the potential drawbacks. Cloud should be thought of as a means to an end, and the end must be specified first, Gartner advises.
Myth 5: We need one cloud strategy or one vendor
Cloud computing is not one thing, warns Gartner. Cloud services span the IaaS, PaaS and SaaS models, and cloud types include private, public and hybrid clouds; different applications are right for different types of cloud. A cloud strategy should be based on aligning business goals with potential benefits. Those goals and benefits differ across use cases and should be the driving force for businesses, rather than standardising on one strategy or vendor.
Myth 6: Cloud is less secure than on-premises IT
Cloud is perceived as less secure. To date, there have been very few security breaches in the public cloud — most breaches continue to involve on-premises datacentre environments.
Myth 7: Cloud is not for mission-critical use
Cloud is still mainly used for test and development. But the analyst firm notes that many organisations have progressed beyond early use cases and are using the cloud for mission-critical workloads. There are also many enterprises (such as Netflix or Uber) that are “born in the cloud” and run their business completely in the cloud.
Myth 8: Cloud = Datacentre
Most cloud decisions are not (and should not be) about completely shutting down datacentres and moving everything to the cloud. Nor should a cloud strategy be equated with a datacentre strategy. In general, datacentre outsourcing, datacentre modernisation and datacentre strategies are not synonymous with the cloud.
Myth 9: Migrating to the cloud means you automatically get all cloud characteristics
Don’t assume that “migrating to the cloud” means that the characteristics of the cloud are automatically inherited from lower levels (like IaaS), warned Gartner. Cloud attributes are not transitive. Distinguish between applications hosted in the cloud from cloud services. There are “half steps” to the cloud that have some benefits (there is no need to buy hardware, for example) and these can be valuable. However, they do not provide the same outcomes.
Myth 10: Private cloud = virtualisation
Virtualisation is a cloud enabler, but it is not the only way to implement cloud computing, nor is it sufficient on its own. Even if virtualisation is used (and used well), the result is not necessarily cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as “private cloud”, according to the analyst firm.
“From a consumer perspective, ‘in the cloud’ means where the magic happens, where the implementation details are supposed to be hidden. So it should be no surprise that such an environment is rife with myths and misunderstandings,” said David Mitchell Smith, vice president and Gartner Fellow.
Business continuity is often perceived as a concept followed only by the biggest of big business, but the reality is that the need for it, and the corresponding services, increasingly underpin everyday life. An invisible safety net that makes sure important everyday events continue, no matter what, is crucial for all verticals. And education is no exception.
In this guest blogpost, Mike Osborne, school governor and head of business continuity at Phoenix IT, talks about the importance of business continuity for Ucas.
During the last few weeks, despite the fact that students now have to pay much higher fees for studying, we have seen more people than ever applying for higher education. An extra 30,000 new places were created this year. This has made the competitive battle between universities even more intense as they fight to secure the best students, especially over the clearing period.
For both the Universities and Colleges Admissions Service (Ucas) and universities, the clearing and application periods are a time when the availability and functioning of their operations are most visible, not just to students and their parents but also to the government and the media.
In 2011, both universities and students experienced massive problems with the Ucas online system during the clearing and application periods. This year, it’s more important than ever for Ucas, universities and students alike that there are no system disruptions, so students can get the offers they need in a timely fashion and universities can fill their places.
Until 20 September, when the clearing vacancy search closed, Ucas was put to the test as thousands of students scrambled to get an offer through the clearing system. According to Ucas, last year some 20,000 applicants were placed at a university or college through clearing on the first weekend after A-level results were announced. Considering the critical nature of this period, it’s essential that Ucas and universities have ICT and call centre resources operating effectively, without interruptions affecting operations.
ICT and call centre systems are vulnerable to a variety of service disruptions, ranging from severe disasters (such as fire) to mild ones (such as short-term software glitches or loss of power or communications). Universities and Ucas are now putting in place robust ICT contingency plans, such as workplace business continuity and cloud-based disaster recovery as a service (DRaaS), to ensure that information processing systems and student data critical to the university are maintained and protected against relevant threats, and that the organisation can recover systems in a timely and controlled manner.
With many mid-market companies also seeing the potential of disaster recovery using cloud technology, it’s not surprising that universities and Ucas are spending more time, money and effort on implementing DRaaS plans. DRaaS allows data to be stored securely offsite and, if the right service is selected, can also provide near-instantaneous system and network recovery.
When added to call centre recovery services as part of a business continuity plan, DRaaS offers a convenient and cost-effective solution.
With the government and the Higher Education Funding Council for England (Hefce) imposing fines on institutions for over-recruitment, and with student data, including unique research projects, growing, it is more essential than ever for universities and Ucas to keep system downtime to a minimum.
Organisations tend to fall into one of two camps today: those already planning and implementing a cloud strategy, and those who will be doing so soon. But the options companies face are dizzying, often contradictory, and usually dangerously expensive. So what’s the best way for organisations to find the ideal cloud service for their specific needs?
Determining what is needed from the cloud will drive which platform organisations should deploy on. Considerations like budget, expected performance and project timeline all have to be carefully balanced before plunging ahead. Broadly speaking, the platform options range from using someone else’s public cloud, such as AWS, to building your own private cloud from scratch. Where an organisation lands on that spectrum will be driven by how it ranks the primary factors involved.
In this guest blogpost, Christopher Aedo, chief product architect at Mirantis, explains how to evaluate cloud requirements and pick the right platform.
In essence there are seven key factors to address that will help businesses clarify what really matters and enable them to establish their individual cloud requirements. These are:
Control: How much control do you have over the environment and hardware? Make sure the cloud platform you select delivers the level of control you require.
Deployment time: How long before you need to be up and running? How much time will you burn just sorting out, ordering, racking and provisioning the hardware? It is critical that the cloud platform you choose can be deployed in the right amount of time.
Available expertise: Can your single IT staff member handle the project, or do you need a team of experts poached from the biggest cloud makers? Choose a cloud platform that matches the expertise you have available – or can afford to bring in.
Performance: In a single server there are many components affecting performance – from the memory bus to the NIC and everything in between. Performance directly correlates with budget: a larger budget will usually buy greater performance. That said, there is no reason a smaller budget can’t deliver high performance, provided you select the right option.
Scalability: Your platform of choice should accommodate adding, or reducing, capacity quickly and easily. Will your chosen platform require downtime to scale up or down or can it be executed seamlessly?
Commitment: From no-contract “utility pricing” to the long-term investment of owning all your gear – the longer you’re tied in, the greater the risk.
Cost: This may be the most important and most difficult factor to account for. You can see it as an output of your other factors, or as the ultimate limiter dictating where you’ll make concessions. There are good ways to maximise your budget while minimising your risk, as long as you keep your head up and your eyes open.
By addressing these factors early in the process of implementing a cloud-based solution, you will save yourself time, resource and budget in the long run. Having established what you want the cloud to deliver, it is important to match your requirements with the right type of cloud platform.
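One way to make this kind of evaluation concrete is a simple weighted scoring exercise. The sketch below is purely illustrative: the weights and the 1-5 scores per platform are invented assumptions for demonstration, not figures from any real benchmark, and every organisation would plug in its own numbers.

```python
# Illustrative weighted-score sketch for comparing cloud platform options
# against the seven factors discussed above. All weights and scores are
# hypothetical assumptions -- substitute your organisation's own values.

FACTORS = ["control", "deployment_time", "expertise", "performance",
           "scalability", "commitment", "cost"]

# How much each factor matters to your organisation (sums to 1.0).
weights = {"control": 0.10, "deployment_time": 0.20, "expertise": 0.15,
           "performance": 0.15, "scalability": 0.15, "commitment": 0.10,
           "cost": 0.15}

# Invented 1-5 scores per option per factor (higher is better).
options = {
    "public_cloud":   {"control": 1, "deployment_time": 5, "expertise": 5,
                       "performance": 2, "scalability": 5, "commitment": 5,
                       "cost": 3},
    "hosted_private": {"control": 3, "deployment_time": 2, "expertise": 2,
                       "performance": 4, "scalability": 2, "commitment": 2,
                       "cost": 3},
    "build_your_own": {"control": 5, "deployment_time": 1, "expertise": 1,
                       "performance": 4, "scalability": 3, "commitment": 1,
                       "cost": 2},
    "pcaas":          {"control": 4, "deployment_time": 4, "expertise": 3,
                       "performance": 4, "scalability": 4, "commitment": 4,
                       "cost": 3},
}

def rank_options(options, weights):
    """Return option names sorted by weighted score, best first."""
    def score(scores):
        return sum(weights[f] * scores[f] for f in FACTORS)
    return sorted(options, key=lambda name: score(options[name]),
                  reverse=True)

if __name__ == "__main__":
    for name in rank_options(options, weights):
        print(name)
```

The point is not the arithmetic but the discipline: writing the weights down forces a team to agree on which factors actually matter before vendors start pitching.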
Here are the main cloud options:
Option 1: The Public Cloud
The big players here are AWS and Rackspace, but there are other contenders with fewer bells and whistles, like DigitalOcean and Linode. These represent the lowest barrier to entry (you just need net access and a credit card!) but also offer the least control, and the steepest cost increases as you scale up.
The public cloud is priced like a utility, offering the opportunity to scale up and down as needed. This is well suited to handling highly elastic demand, but it’s important to keep an eye on what you’ve spun up.
With a public cloud you get limited access to the underlying hardware and no visibility into what’s beneath the covers of the cloud – although you do get some flexibility in configuration and near-instant deployment of services, without the need for any real expert to be involved.
However, generally speaking, you’re going to find relatively low performance with a public cloud, with higher performance coming at significantly increased cost. You can also expect to be billed by the minute in return for not being held to any contract. Many providers offer discounts in exchange for some sort of commitment, but then you give up the ability to drop resources when you no longer need them.
Option 2: Hosted Private Cloud
There are many well-known vendors offering options in this space, ranging from complete turnkey environments to build-to-order approaches. They will provide the hardware on a short-term lease, and will charge you to manage that hardware.
Companies like Rackspace will work with you to architect your environment and provide assistance in deployment – which could take up to six weeks. You’ll need moderate to extreme expertise; your average junior sysadmin is going to be way out of their depth using such a service.
Levels of control will vary from high to minimal, depending on how much of the platform you manage and deploy yourself. The level of commitment will also vary, but the longer your commitment, the more likely an alternative platform is to make sense. Hosted private cloud is not well suited to elastic demand – scaling up takes two to six weeks, and generally there is no scale-down option.
Option 3: Build your own private cloud (BYPC)
BYPC requires a high level of technical expertise within the business and presents the greatest technical and financial risk. However, you will have total control over the hardware design, the network design and how your cloud components are configured – but expect this to take a year to 18 months to complete.
Costs in the build-your-own approach can be kept down if performance and reliability are of no concern, or they can (needlessly) go through the roof if you’re not making carefully planned decisions. The performance of BYPC will be entirely dependent on your budget constraints and how successful your architectural planning is.
There are lots of moving pieces, and the risks are tremendous, as you may be committing hundreds of thousands of dollars to your cloud pilot. Ask anyone who’s actually tried this; it’s a lot harder than it looks.
Option 4: Private-cloud-as-a-Service (PCaaS)
PCaaS, such as OpenStack, represents a balance between the value and flexibility of public cloud and the control of private cloud.
PCaaS provides total control over how hardware is used, and that hardware is 100% dedicated to you, with a minimum one-day commitment on a rolling contract. Because of the minimal commitment, it can be deployed within a few hours, and you are free to scale your environment up and down at nearly the same pace as on a public cloud.
The costs are higher than a comparable number of VMs in a public cloud, but with no long-term commitment and clear pricing from the start, your financial risks are lower than any other private cloud approach.
You’ll need a moderate skill level with PCaaS, but your risks are mitigated because you’re in a managed environment. Until recently, PCaaS required a reasonable amount of OpenStack knowledge, but developments such as OpenStack Express have drastically reduced the expertise needed to implement it.
Each of these cloud platforms has validity, as well as a real sweet spot, where that particular approach is the only obvious good choice for your business needs. If you properly consider your requirements and how they match with the options available, your cloud project will not end up as a costly mistake.
Microsoft’s Azure service status website showed at 5pm BST on Friday 26 September that, while the core Azure platform components were working properly, there was “partial performance degradation” on Azure’s HDInsight service for customers in West Europe.
The status website warned that customers may experience intermittent failures when attempting to provision new HDInsight clusters in West Europe.
HDInsight is a cloud-hosted Hadoop distribution. It allows IT to process unstructured or semi-structured data from web clickstreams, social media, server logs, devices and sensors, and to analyse that data for business insights.
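To give a flavour of the kind of job this means in practice, here is a minimal sketch of counting HTTP status codes in web server logs. The log format and field positions are assumptions for illustration; a real HDInsight job would run equivalent logic as a distributed Hadoop job (for example via streaming or Hive) rather than in a single process.

```python
# Illustrative sketch: aggregating status codes from server logs, the sort
# of semi-structured data HDInsight processes at scale. The log format
# below is an assumed common-log-format example, not a real dataset.

from collections import Counter

def parse_status(line):
    """Extract the HTTP status code from a common-log-format line.

    Returns None for lines that don't parse -- tolerating messy,
    semi-structured input is exactly why tools like Hadoop exist.
    """
    parts = line.split()
    if len(parts) < 2:
        return None
    # Common log format ends: ... "GET /x HTTP/1.1" 200 2326
    status = parts[-2]
    return status if status.isdigit() else None

def count_statuses(lines):
    """Aggregate status-code counts across all log lines."""
    counts = Counter()
    for line in lines:
        status = parse_status(line)
        if status:
            counts[status] += 1
    return counts

if __name__ == "__main__":
    sample = [
        '1.2.3.4 - - [26/Sep/2014] "GET / HTTP/1.1" 200 2326',
        '1.2.3.5 - - [26/Sep/2014] "GET /x HTTP/1.1" 404 512',
        '1.2.3.4 - - [26/Sep/2014] "GET / HTTP/1.1" 200 2326',
        'malformed line',
    ]
    print(count_statuses(sample))  # Counter({'200': 2, '404': 1})
```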
Microsoft has assured cloud users that its engineers have identified the root cause of the performance degradation and are working on mitigation steps.
The company has vowed to provide updates every two hours, or sooner as events develop. I sense a long wait before the weekend beckons for European enterprises using Azure.
Doesn’t the NHS use Microsoft Azure HDInsight? Oh yes, it does!
Two very large companies that have been under tremendous pressure in the software-defined storage and cloud era – EMC and HP – toyed with the idea of a merger, according to the Wall Street Journal, but the idea eventually fell apart over concerns on both sides about whether their shareholders would give it a nod.
The deal would have created a mega-vendor worth $130bn with HP’s Meg Whitman as the chief executive of the combined entity and EMC’s Joe Tucci as chairman or president.
Ailing EMC has been under pressure from investors calling for it to spin off VMware, on the prospect that the company will do better if split up.
According to WSJ, the EMC-HP merger talks have been going on for almost a year.
But the combination of two traditional vendors would have only meant more of the same old legacy: complex, slow and big IT offerings. There is little meaningful synergy but plenty of service overlap.
HP has a bad history with acquisitions. A merger would have been bad news for both companies, even though EMC has a better acquisitions track record and is attempting to redefine itself in the new cloud era.
EMC Corp is far more than EMC – it has fingers in the pies of VMware, RSA, VCE, Pivotal and so on; unpicking these, or keeping them going as before, would be difficult.
Other names mentioned in a merger with EMC include Dell and Cisco Systems.
Mergers are always hit or miss, and more of a risk when the stakes are higher, as in this case. The problem with these traditional vendors is that, in the past, they have tried to address every aspect of the datacentre, and so they have competing products. For example, EMC-owned VMware’s software-defined networking (SDN) offerings threaten Cisco’s switch and router business, worth billions.
As one analyst tells me, if EMC is really seeking a merger, it should be going for a Rackspace-type platform company (not Rackspace itself, as it has now ruled itself out), where EMC can make a bigger play of VMware’s cloud offering, of the whole software-defined everything message, of ViPR and so on.
Or would Tucci go for Cisco? Markets are betting on an EMC-Cisco deal, with EMC shares up 16 cents.
A merger with heavyweight HP would have left a company trying to sell a complex approach into customers’ standard datacentres. Thankfully, it was only a thought.
I had a chance to see Michael Dell – in the flesh – for the first time yesterday in Brussels at the Dell Solutions Summit. He delivered a great keynote on Dell’s datacentre strategy and its investment plans, and also spoke about all the hot IT topics – software-defined infrastructure, internet of things, security and data protection.
Michael sounded optimistic about Dell’s place in the future of IT, but what was new was how open Dell has become as a company, and its firm commitment to all the things that define new-age IT – software-defined, cloud, security, mobile, big data, next-gen storage and IoT.
For one, Michael was candid with the numbers. He said:
- The total IT market is worth $3 trillion and we have a 2% share of it. Only 10 companies have 1% or more share of that $3 trillion market.
- Dell’s business comprises 85% government and enterprise IT and just 15% is end-user focused.
This kind of number-feeding to press and analysts is new at Dell, which until now, like the rest of the industry’s service providers, kept business numbers close to its chest.
But that was not all; Michael didn’t hold back from saying a few things that raised eyebrows:
- “I wish we hadn’t made some of the acquisitions we did.”
- “As ARM moves to 64-bit architecture, it becomes more interesting,” Michael said. He said the company is open to working with its longstanding partner Intel’s rival for mainstream datacentre products if that’s where the market moved.
- He also said Dell is a big believer in the software-defined future. “We ourselves are moving our storage IT into a software defined environment.”
- And to those that wrote off the PC industry, Michael said: “We absolutely believe in the PC business, we are consolidating/growing”.
Michael’s optimism and confidence in the company’s future are a far cry from last year, when the company’s ailing business strategy forced it out of the public eye.
“Going private has helped us,” he said while speaking in Brussels. “It has enabled us to put our focus 100% on our customers. We have invested more in research, development, innovation and in channels in the past year.”
Dell also seems to be striking the right chord with its customers, channel and analysts: those I spoke to said they like the company a lot and are pleased with how quickly it adapts and listens to its users.
Dell Research will focus on five areas – software-defined IT, next-generation storage (NVM, flash), next-gen cooling, big data/analytics, and IoT. Analysts say that’s a good bet.
“Dell’s foray into research clearly designed to establish it as an IT innovator as well as a scale/efficiency player,” says Simon Robinson from 451 Research group on Twitter.
Product-wise, too, it is making progress. Dell has been more creative than its competitors in designing its new servers around the latest Xeon chip. Its 13th-generation PowerEdge servers have capabilities such as NFC for server inventory management, new flash capabilities and more front sockets.
Dell is also being innovative in its enterprise cloud strategy. It provides reference architectures, proofs of concept and server technologies to its system integrators, which carry out cloud implementations for customers. Having catered to the likes of AWS in the past, Dell has used that cloud experience to build reference architectures, but gets the channel to implement them.
“We see private cloud as the future of cloud computing,” Michael said. According to him, enterprises in Europe prefer “local” clouds for data sovereignty and privacy reasons, so Dell is supporting local system integrators with local datacentres to build clouds for customers.
Michael and his company are certainly making the right noises and investing in the right technologies. But whether that will improve Dell’s ranking in the datacentre (which I see as fourth, after Cisco, HP and IBM, in that order), only time will tell.
Also, is it symbolic that Dell held its Solutions Summit party at the world-famous Comic Strip Museum in Brussels – the home of Tintin, Captain Haddock, the Smurfs and Asterix? Don’t know, but I sure did have fun!