Ahead in the Clouds


June 25, 2014  12:32 PM

The super world of supercomputers

Archana Venkatraman
Leipzig, Supercomputer, Tianhe-2

Last Thursday, I met AWS to learn how users are building supercomputers in the cloud and also to see one being created right in front of me!

Unfortunately, the demo didn’t succeed. I don’t know if it was buggy code or something else, but Ian Massingham, technology evangelist at AWS, wasn’t able to create a supercomputer, and he was as disappointed about it as I was, if not more.

But Ian had created one the previous evening. “Just ran up my first HPC cluster on AWS using the newly released cfncluster demo,” read his tweet from the day before. The demo video AWS subsequently sent me also showed how to get started with cfncluster in five minutes.

Amazon cfncluster is a sample code framework, available for free, that helps users run high-performance computing (HPC) clusters on AWS infrastructure.
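
For anyone who wants to try what Ian demoed, the “five-minute” workflow boils down to installing the cfncluster command-line tool, answering its configuration prompts and asking it to stand a cluster up. The sketch below drives those commands from Python; the cluster name is a placeholder and the steps are my paraphrase of the demo, not official AWS documentation.

    # Rough sketch of the cfncluster workflow, driven from Python. Assumes the
    # cfncluster CLI is installed (pip install cfncluster) and AWS credentials
    # are already set up; "demo-hpc" is a placeholder cluster name.
    import subprocess

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["cfncluster", "configure"])           # interactive: pick region, key pair, VPC
    run(["cfncluster", "create", "demo-hpc"])  # provisions master and compute nodes via CloudFormation
    # ... submit HPC jobs to the cluster's scheduler from the master node ...
    run(["cfncluster", "delete", "demo-hpc"])  # tear it down so you stop paying for idle nodes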

I got to hear how enterprise customers, pharmaceutical companies, scientists, engineers and researchers are building cluster computers on AWS for some pretty serious tasks, such as medical research and assessing companies’ financial standing, all while saving money. (My feature article on how enterprises are exploiting public cloud capabilities for HPC will appear on the ComputerWeekly site soon.)

And having spent the last two days at the International Supercomputing Conference (ISC 2014) in Leipzig, I feel that high-performance computing, hyperscale computing and supercomputers form the fastest-growing subset of IT. HPC is no longer restricted to science labs; enterprises such as Rolls-Royce and Pfizer are building supercomputers too, to analyse jet engine compressors and to research diseases respectively.

Tianhe-2 (Photo credit: sam_churchill)

Take Tianhe-2, the supercomputer developed by China’s National University of Defense Technology for research, which retained its position as the world’s fastest supercomputer. It has 3,120,000 cores, delivers 33.86 petaflops (quadrillions of calculations per second) and draws 17,808 kW of power. Or take the US Department of Energy’s Titan, a Cray XK7 system running more than 560,000 cores, or any of the UK’s top 30 supercomputers. They are all mind-boggling in their size, compute performance and uses.
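
To put those headline figures in perspective, a quick back-of-the-envelope calculation (mine, using only the numbers quoted above) gives the per-core and per-watt performance:

    # Back-of-the-envelope arithmetic using the Tianhe-2 figures quoted above.
    cores = 3_120_000
    petaflops = 33.86          # sustained performance
    power_kw = 17_808

    flops = petaflops * 1e15
    print(f"{flops / cores / 1e9:.1f} gigaflops per core")             # ~10.9 gigaflops
    print(f"{flops / (power_kw * 1e3) / 1e9:.2f} gigaflops per watt")  # ~1.90 gigaflops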

Whether in the cloud or on-premises, I didn’t hear a single HPC use case in the last two days that wasn’t cool or awe-inspiring. Imperial College London, the University of Tromsø in Norway, the US Department of Energy, Edinburgh University and AWE all use supercomputers for research and computation around things that matter to you and me. As one analyst told me: “From safer cars to shinier hair, supercomputers are used to solve real-life problems.”

Now I know why Ian was having a hard time picking his favourite cloud HPC project – they’re all cool.

June 19, 2014  10:47 AM

What Cloud World Forum 2014 tells us about cloud

Archana Venkatraman
Ebay, eNovance, IBM, JasperSoft, OpenStack, Rackspace, Red Hat, VMware

The sixth annual Cloud World Forum wrapped up yesterday and here’s what the event tells us about the state of cloud IT in the enterprise world.

OpenStack is gaining serious traction

OpenStack’s big users and providers claimed the cloud technology is truly enterprise-ready because of its freedom from vendor lock-in and its portability features. Big internet companies such as eBay are running mission-critical workloads on OpenStack clouds. Even smaller players, such as the German company Centralway, are using the open-source cloud to power their infrastructure when TV adverts create load peaks.

HP says it is “all in” when it comes to OpenStack, investing over $1bn in cloud-related products and services, including the open-source cloud. Red Hat has just acquired eNovance, a leader in OpenStack integration services, for $95m. Rackspace and VMware are ramping up their OpenStack services, and IBM has built its cloud strategy around OpenStack.

A skills shortage around integrating OpenStack APIs into cloud infrastructure seems to be the only big barrier to its wide-scale adoption.
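
To give a flavour of what those skills involve, here is a minimal sketch of driving an OpenStack cloud programmatically with the openstacksdk Python library; the cloud entry, image, flavour and network names are placeholders of mine, not details of any particular distribution.

    # Minimal sketch: listing servers and launching one against an OpenStack cloud
    # using the openstacksdk library. "my-cloud" must exist in clouds.yaml; the
    # image, flavour and network names are placeholders.
    import openstack

    conn = openstack.connect(cloud="my-cloud")

    for server in conn.compute.servers():
        print(server.name, server.status)

    image = conn.compute.find_image("ubuntu-14.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")
    conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )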

Rise of the cloud marketplace

The cloud marketplace is fast becoming an important channel for cloud transactions. According to Ovum analyst Laurent Lachal, JasperSoft gained 500 new customers in just six months through the AWS Marketplace. Oracle, Rackspace, Cisco, Microsoft and IBM have all recently launched cloud services marketplaces.

What does it mean for users? Browsing the full spectrum of cloud services will become as easy as browsing apps in the Apple App Store or Google Play. “As cloud matures, an established marketplace seems like a logical evolution. It is a new trend, but it gives users a wealth of options in a one-stop-shop kind of way,” said Lachal.

Vendor skepticism on the rise

Bank of England CIO John Finch, in his keynote, warned users about “pesky vendors” and cloud providers’ promises around the “financial upside of using the cloud”. Legal experts and top enterprise users urged delegates to understand SLAs and contract terms very clearly before shaking hands with cloud providers.

Changing role of CIOs

It became apparent at the event that cloud is fuelling the rise of shadow IT, and that CIOs must take on the role of technology broker, educating enterprise users on compliance and security. Technology integration, IT innovation and service brokerage are some of the skills CIOs need to develop in the cloud era.

Questions around compliance, data protection and security on the cloud remain unanswered

Most speakers focusing on the challenges of cloud adoption cited security, data sovereignty, privacy, compliance and vendor-friendly SLAs as the biggest barriers.

Not all enterprises using cloud are putting mission-critical apps on public cloud

Lack of trust seems to be the main reason why enterprises are not putting mission-critical workloads on public cloud. Bank of England’s Finch only just stopped short of saying “never” to public cloud. Take Coca-Cola bottling company CIO Onyeke Nchege, for instance: he is planning to put mission-critical ERP systems on the cloud, but a private one. eBay runs its website on OpenStack, but on a private version it built for itself. One reason customers cite is that mission-critical apps tend to be more static and don’t need fast provisioning or high scalability.

“It is not always about the technology though. In our case our metadata is not sophisticated enough for us to take advantage of public cloud,” said Charles Ewan, IT director at the Met Office.

But there are exceptions, such as AstraZeneca, which runs payroll workloads on public cloud, and News UK, which manages its flagship newspaper brands on the AWS cloud.

Urgent need for cloud standards in the EU

A lack of standards and regulations around data protection, data sovereignty and cloud exit strategies is making cloud adoption messy. Legal technology experts urged users to be “wise” in their cloud adoption until such regulations are developed. But regulators and industry bodies, including the European Commission, the FCA and the Bank of England, are inching closer to developing guidelines and regulatory advice to protect cloud users.

Everyone’s trying to get their stamp on the cloud

A more crowded than ever Cloud World Forum saw traditional heavyweights (IBM, HP, Dell, Cisco) rub shoulders with a slew of new, smaller entrants as well as public cloud poster boys such as AWS, Google and Microsoft Azure. Technology players ranging from chip providers to sellers of datacentre cooling services were all there to claim their place in the cloud world.


June 12, 2014  3:43 PM

Why do some cloud projects fail?

Archana Venkatraman
Business, Cloud Computing, DevOps, information technology, OpenStack, VMware

I was at a roundtable earlier this week discussing the findings of an enterprise cloud study. The findings are embargoed until June 24, but what struck me most were the numbers around failed or stalled cloud projects.

And that led me to discuss it more with industry insiders. Here are a few reasons why cloud projects might fail:

  • Using cloud services but not using them to address business needs

One joke doing the rounds in the industry goes a bit like this: the IT head tells his team, “You lot start coding, I’ll go out and ask them what they want.”

But the issue of not aligning business objectives with IT is still prevalent. The latest study by Vanson Bourne found that as many as 80% of UK CIOs admit to significant gaps between what the business wants and when IT can deliver it. While the average gap cited was five months, in some cases it stretched from seven to 18 months.

  • Moving cloud to production without going through the SLAs again and again. And again

If one looks at the contracts of major cloud providers, it becomes apparent that the risk is almost always pushed on to the user rather than the provider, be it around downtime, latency, availability or data regulations. It is one thing to test cloud services and quite another to put them into actual production.

  • Hasty adoption

Moving cloud to production hastily, without testing and piloting the technology enough and without planning management strategies, will also lead to failure or disappointment with cloud services.

  • Badly written apps

If your app isn’t configured correctly, it shouldn’t be on the cloud. Just migrating badly written apps to the cloud will not make them work. And if you are not a marquee customer, your cloud provider will not help you with them either.

  • Being obsessed with cost savings on the cloud

One expert puts it this way: those who adopt cloud for cost savings fail; those who use it to do things they couldn’t do in-house succeed. Cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability, but the primary reason for cloud adoption should be to grow the business and enable new revenue-generating opportunities. For example, News UK adopted cloud services with the aim of transforming its IT and managing its paywall strategy. The savings were a byproduct.

  • Early adoption of cloud services… Or leaving it too late

Ironic as it may sound, if you are one of the earliest adopters of cloud, chances are that your cloud is an early iteration that is not as rich in features as newer versions. It may even be more complex than current cloud services. For instance, there is a lot of technical difference between the pre-OpenStack Rackspace cloud and its OpenStack-based successor.

If you’ve left it too late, then your competitors are ahead of the curve and the other business stakeholders influence IT’s cloud buying decisions.

  • Biased towards one type of cloud

Hybrid IT is the way forward. Being too obsessed with private cloud services will lead to deeper vendor lock-in, and adopting too much public cloud will lead to compliance and security issues. Enterprises should not develop a private cloud or a public cloud strategy as such, but use the cloud elements that best solve their problems. Take Betfair, for instance: it uses a range of different cloud services, including the Amazon Redshift data warehouse service for analytics and VMware vCloud for automation and orchestration.

  • Relying heavily on reference architecture

Cloud services are meant to be unique to suit individual business needs. Replicating another organisation’s cloud strategies and infrastructure is likely to be less helpful.

  • Lack of skills and siloed approach

Cloud may indeed have entered mainstream computing, but the success of cloud depends directly on the skills and experience of the team deploying it. Hiring engineers and cloud architects with AWS experience to build a private cloud may backfire. Experts have also called on enterprises to embrace DevOps and cut down on the siloed approach to succeed in the cloud. British Gas, for example, hired IT staff with the right skills for its Hive project, which is built on the public cloud.

  • Viewing it as in-house datacentre infrastructure or traditional IT

Cloud calls for new ways of IT thinking. Just replacing internal infrastructure with cloud services while using the same IT strategies and policies to govern the cloud might result in failure.

There may be other enterprise-related problems, such as lack of budget, cultural challenges or legacy IT, that result in failed or stalled cloud projects, but more often it is the strategy (or the lack of it) that is to blame rather than the technology.


May 28, 2014  10:15 PM

My 10 minutes with Google’s datacentre VP

Archana Venkatraman
Data Center, Google, Joe Kava, Machine learning, Monaco, PUE

Google’s Joe Kava speaking at the Google EU Data Center Summit (Photo credit: Tom Raftery)

At the Datacentres Europe 2014 conference in Monaco, I had the chance not just to hear Google’s datacentre VP Joe Kava deliver a keynote speech on how the search giant uses machine learning to achieve energy efficiency, but also to speak to him one-to-one for 10 minutes.

Here is my quick Q&A with him:

What can smaller datacentre operators learn from Google’s datacentres? There’s a feeling among many CIOs and IT teams that Google can afford to pump in millions into its facilities to keep them efficient.

Joe Kava: That attitude is not correct. In 2011, we published an exhaustive “how to” instruction set explaining how datacentres can be made more energy efficient without spending a lot of money. We can demonstrate it through our own use cases. Google’s network division, which is the size of a medium enterprise, had a technology refresh, and by spending between $25,000 and $50,000 per site we could improve its high-availability features and bring its PUEs down from 2.2 to 1.5. The savings were so high that they yielded a payback of the IT spend in just seven months. You show me a CIO who wouldn’t like a payback in seven months.
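
As a rough illustration of how a PUE improvement on that scale pays for itself, here is some back-of-the-envelope maths. The IT load per site and the energy price are my own assumptions purely for illustration; only the PUE figures and the $25,000-$50,000 spend come from Kava.

    # Illustrative payback maths for a PUE improvement from 2.2 to 1.5.
    # IT load and electricity price are assumed values, not Google's.
    it_load_kw = 150          # assumed average IT load per site
    price_per_kwh = 0.10      # assumed electricity price, $/kWh
    retrofit_cost = 50_000    # top end of the per-site spend quoted above

    overhead_saved_kw = it_load_kw * (2.2 - 1.5)           # 105 kW less facility overhead
    monthly_saving = overhead_saved_kw * 24 * 30 * price_per_kwh
    print(f"Monthly saving: ${monthly_saving:,.0f}")                 # ~$7,560
    print(f"Payback: {retrofit_cost / monthly_saving:.1f} months")   # ~6.6 months

Under those assumed numbers, the payback lands close to the seven-month figure Kava quotes.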

Are there any factors, such as strict regulations, that are stifling the datacentre sector?

It is always better for an industry to regulate itself than to have the government do it; it fosters innovation. There are many players in the industry that voluntarily regulate themselves in terms of data security and carbon emissions. One example is how, since 2006, the industry has rallied strongly behind the PUE metric and taken energy efficiency tools to heart.

What impact is IoT having on datacentres?

Joe Kava: IoT (internet of things) is definitely having an impact on datacentres. As greater volumes of data are created and mass adoption of the cloud takes place, IT will naturally have to think differently about datacentres and their efficiency. IoT brings huge sets of opportunities to datacentres.

What is your one piece of advice to CIOs?

You may think I am saying this because I am from Google, but I strongly feel that most people who operate their own datacentres shouldn’t be doing it. That’s not their core competency. Even if they do everything correctly, and even if they have a big budget to build a resilient, highly efficient datacentre, they cannot compete with the quick turnaround and scalability that dedicated third-party providers can offer.

Tell us something about Google’s datacentres that we do not know

It is astounding to see what we can achieve in terms of efficiency with good old-fashioned testing and development and diligence. The datacentre team constantly questions the parameters and constantly pushes the boundaries to find newer ways to save money with efficiency. We design and build a lot of our own components and I am not just talking about servers and racks. We even design and build our own cooling infrastructure and develop our own components of the power architecture that goes into a facility.

It is a better way of doing things.

Are you building a new datacentre in Europe?

(Smiles broadly) We are always looking at expanding our facilities.

How do you feel about the revelations of the NSA surveillance project and how it has affected third-party datacentre users’ confidence?

It is a subject I feel very strongly about, but it is a question I will let Google’s press and policy team handle.

Thank you Joe

Thank you!

 



May 21, 2014  12:45 PM

No such thing as absolute freedom from vendor lock-in, even in open source, proves Red Hat

Archana Venkatraman
Cloud Computing, IBM, Inktank, Linux, Open source, OpenStack, Red Hat, Red Hat Enterprise Linux

OpenStack is a free, open-source cloud computing platform that promises users freedom from vendor lock-in. When it was alleged that Red Hat won’t support customers who use other versions of the OpenStack cloud on its Linux operating system, its president Paul Cormier passionately shared the company’s vision of open source but steered clear of stating wholeheartedly that it WILL support its users no matter what version of OpenStack they use.

Any CIO worth their salt will admit that support services can be a deal-breaker when deciding to invest in technology.

Red Hat customers opt for the vendor’s commercial version of Linux (RHEL) over free Linux distributions because they want its support services to make their IT enterprise-class. This has helped Red Hat build a $10bn empire around Linux and become the most dominant provider of commercial open-source platforms.

OpenStack (Photo credit: Wikipedia)

So when Cormier says, “Users are free to deploy Red Hat Enterprise Linux with any OpenStack offering, and there is no requirement to use our OpenStack technologies to get a Red Hat Enterprise Linux subscription,” and, separately, “Our OpenStack offerings are 100% open source. In addition, we provide support for Red Hat Enterprise Linux OpenStack Platform,” customers are still likely to pick Red Hat’s OpenStack cloud on the Red Hat operating system, resulting in supplier lock-in.

Cormier justified the position: “Enterprise-class open source requires quality assurance. It requires standards. It requires security. OpenStack is no different. To cavalierly ‘compile and ship’ untested OpenStack offerings would be reckless. It would not deliver open-source products that are ready for mission-critical operations, and we would never put our customers in that position or at risk.”

Yes, Red Hat has to seek growth from its cloud offerings and, as an open-source leader, it has to protect the reputation of the open cloud as enterprise-ready.

Red Hat’s efforts in the open-source industry are commendable. For instance, it acquired Ceph provider Inktank last month and said it would open-source Inktank’s closed-source monitoring offering.

But as open source’s poster child, it also has a responsibility to contribute more to the spirit of the open cloud and to invest more in open-source technology, giving users absolute freedom to choose the cloud they like.

Competition among cloud providers is getting fiercer. To grab a larger share of the growing market, some cloud providers are slashing prices while others are differentiating themselves by offering managed services. But snatching flexibility and freedom from cloud users is never a good idea.

But it would be unfair to single out Red Hat and ask it alone to open up its ecosystem. HP, IBM, VMware and Oracle are all part of the OpenStack project, and all have their own versions of the OpenStack cloud.

As Cormier says: “We would celebrate and welcome competitors like HP showing commitment to true open source by open-sourcing their entire software portfolio.”

Until then, it’s a murky world. What open source? What open cloud?



May 14, 2014  2:13 PM

Using cloud for test and development environments? Avoid this costly mistake

Archana Venkatraman
AWS, Business, Cloud Computing, customer, E-commerce, Green computing, Scalability, Software testing

Using cloud services for application testing or software development is becoming a common practice because of cloud’s scalability, agility, ease of deployment and cost savings.

But some users are not reaping the cost-saving benefit, and in some cases are even seeing cloud costs soar, because of a simple error: they are not turning instances off when they are not in use.

Time and again, purveyors of cloud computing have highlighted scalability as the hallmark of cloud computing, and time and again users have listed the ability to scale resources up and down as one of the cloud’s biggest cost-saving factors.

But when discussing cloud costs and myths with a public cloud consultancy recently, I was shocked to learn that many enterprises using the cloud for testing and development forget to scale down their testing environment at the end of the day and end up paying for idle IT resources – defeating the purpose of using cloud computing.

Building a test and dev lab in the cloud has its benefits – it saves the team the time of building the entire environment from the ground up, and should the new software not work, they can launch another iteration quickly. But the main benefit is the lower cost.

Yet busy app testers and software developers may leave those instances running, paying for capacity through the hours of the night when no activity takes place on the infrastructure.

On the public cloud, turning down unused instances and capacity does not delete the testing environment, which means developers can simply scale the system up the next day and pick up where they left off.
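
In practice, “turning down” a test environment can be a couple of API calls: stop the tagged instances at the end of the day and start them again in the morning. The boto3 sketch below assumes the instances carry an environment=test tag, which is my convention for the example, not anything AWS mandates.

    # Sketch: stop all EC2 instances tagged environment=test in the evening and
    # start them again the next morning. The tag and region are assumptions.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    def test_instance_ids():
        reservations = ec2.describe_instances(
            Filters=[{"Name": "tag:environment", "Values": ["test"]}]
        )["Reservations"]
        return [i["InstanceId"] for r in reservations for i in r["Instances"]]

    def stop_test_env():
        ids = test_instance_ids()
        if ids:
            ec2.stop_instances(InstanceIds=ids)   # stopped instances stop accruing compute charges

    def start_test_env():
        ids = test_instance_ids()
        if ids:
            ec2.start_instances(InstanceIds=ids)  # the environment comes back as it was left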

But the practice of leaving programs running on the cloud is so common that cloud suppliers, management companies, and consultancies have all developed tools to help customers mitigate this waste.

For instance, AWS provides CloudWatch alarms, which let customers set parameters on their instances so they automatically shut down when idle or underutilised.
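
A minimal sketch of what such an alarm looks like with boto3: it watches average CPU over an hour and triggers the built-in EC2 stop action if the instance sits below the threshold. The instance ID, region and threshold are placeholders.

    # Sketch: a CloudWatch alarm that stops an instance after an hour of <5% average CPU.
    # The instance ID, region and threshold are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    cloudwatch.put_metric_alarm(
        AlarmName="stop-idle-test-instance",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=3600,                     # one-hour evaluation window
        EvaluationPeriods=1,
        Threshold=5.0,
        ComparisonOperator="LessThanThreshold",
        # built-in CloudWatch action that stops the instance when the alarm fires
        AlarmActions=["arn:aws:automate:eu-west-1:ec2:stop"],
    )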

Another tool it offers is AWS Trusted Advisor, available for free to customers on Business-level support or above. It looks at their account activity and actively shows them how they can save money by shutting down instances, buying Reserved Instances or moving to spot pricing.

“In 2013 alone, it generated more than a million recommendations for customers, helping them realise over $207m in cost reductions,” an AWS spokesman told me.

Cloud costs can be slashed by following good practices in capacity planning and resource provisioning. But that is at a strategic level; quick savings can be achieved by simple, common-sense measures such as running instances only when necessary.

Perhaps it is time to think of cloud resources as utilities – if you don’t leave the lights on when you leave work, why leave idle instances running on the pay-as-you-go cloud?

That’s $207m in IT efficiency savings for customers of just one cloud provider. Imagine.

 



May 7, 2014  5:26 PM

AWS may be building a datacentre in Germany, but will the cloud data remain safe and private?

Archana Venkatraman
AWS, Data Center, Germany, London, Microsoft

As public cloud provider AWS looks to expand its datacentre footprint in Europe in the post-Prism world, it may have picked Germany because of the country’s stricter regulations around data sovereignty. But the recent US court ruling asking Microsoft to hand over a customer’s email data held in its Dublin datacentre suggests that data in the cloud, regardless of where it is stored, may not be truly private and secure.

While AWS has not explicitly said it is building a datacentre in Germany, at its London Summit last week Stephen Schmidt, its vice-president and chief information security officer, told me that AWS is always looking to expand and that a Wall Street Journal article was “pretty explicit” about where its next datacentre might be.

The WSJ article quotes senior vice-president Andy Jassy naming Germany as the next datacentre location because of AWS’s “significant business in Germany”, with customers who could be demanding that their data reside within the country.

According to Chris Bunch of Cloudreach, a UK cloud consultancy that implements AWS clouds, AWS is growing so fast and has such market dominance that adding capacity for further growth is clearly sensible; he expects AWS to have built a datacentre in the region within the next 12 months.

Amazon already has three infrastructure facilities in Frankfurt, with seven others in London, Paris and Amsterdam. In addition to these ten edge locations, it has three EC2 availability zones in Ireland catering to EU customers.

But just as one would hail the potential AWS datacentre in Germany as a credible move to protect user data in the cloud, along comes a US magistrate court judgment ordering Microsoft to give the district court access to the contents of one of its customers’ emails stored on a server in Dublin. Microsoft challenged the decision, but the judge rejected its challenge.

Microsoft said: “The US government doesn’t have the power to search a home in another country, nor should it have the power to search the content of email stored overseas.”

“Microsoft’s argument is simple, perhaps deceptively so,” Judge Francis said in an official document, quashing Microsoft’s challenge.

“It has long been the law that a subpoena requires the recipient to produce information in its possession, custody, or control regardless of the location of that information,” he said.

Well, perhaps we still have a long way to go before the rules of data sovereignty are upheld, but with AWS’s growing customer portfolio, it will be good news to have public cloud data reside in Germany, which has some of the strongest and toughest data regulations in the world.



