February 3, 2009 2:57 PM
Posted by: JoMaitland
Tags: Amazon EC2, Amazon Web Services, cloud computing
An industry insider close to Amazon’s Web Services (AWS) business unit told us the company claims to have 400,000 customers using its web services offering.
AWS includes EC2, the compute-on-demand offering; S3, the hosted storage service; SimpleDB, for hosted databases; Simple Queue Service (SQS), a communication channel that lets developers store messages; and CloudFront, a content delivery network.
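To make that lineup a little more concrete, here is a minimal sketch of touching three of these services from Python using boto3, Amazon's current SDK (which postdates this post); the bucket name, queue URL and AMI ID are placeholders, not real resources.

import boto3

# S3: hosted storage -- upload a local file into a bucket
s3 = boto3.client("s3")
s3.upload_file("backup.tar.gz", "example-bucket", "backups/backup.tar.gz")

# SQS: drop a message on a queue for another component to pick up later
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
    MessageBody="new backup uploaded",
)

# EC2: launch a compute instance on demand from a placeholder machine image
ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-00000000", MinCount=1, MaxCount=1, InstanceType="m1.small")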
Amazon has not publicly discussed much detail about its customers or how they are using AWS. For instance, of these 400,000 users, how many are using both EC2 and S3, just S3 or just EC2? Is anyone using SimpleDB or CloudFront yet? How many of these users were one-time customers? My hunch is that the 400,000 figure includes any customer that has ever touched AWS, regardless of whether they are still using it.
In conversations with IT users, it’s clear they are interested in these services, but they need more reference cases on how to use them. A great success story goes a long way.
During a webinar on cloud computing today, James Staten, principal analyst at Forrester Research, said enterprises need more transparency from EC2 to show that it can meet SLAs. “The predictability [of the service] is not good enough for business,” he said, noting that EC2 had two lengthy outages in 2008. Small businesses and gaming and entertainment companies are the biggest adopters of EC2, he said. The former can’t afford to build their own data centers, while gaming and movie companies require extra infrastructure around the release of new games and movies, which can be set up and torn down as needed.
Staten said enterprises are using cloud services like EC2 for R&D projects, quick promotions, partner integration and collaboration, and new ventures. He called for more companies to share how they are using these services and recommended that IT shops begin to experiment with them. Staten suggested endorsing one or two clouds as “IT approved” and establishing an internal policy for using these services. He also urged IT organizations to tell cloud providers what they want and what matters most to them. Secure enterprise links, standards, SLA expectations, levels of support (24/7 phone support, for example)? My guess would be all of the above. If you’d rather, I can hammer on the vendors, so let me know.
January 22, 2009 8:10 PM
Posted by: Bridget Botelho
Tags: cloud computing, cloud computing security, internal cloud computing
VMware, Inc. is on a mission to show companies that they can get the benefits of cloud computing without handing their mission-critical applications over to an outside provider; with the upcoming Virtual Datacenter Operating System (VDC-OS), IT will be able to create secure, private cloud environments.
The yet-to-be-released VDC-OS represents the evolution of VMware Infrastructure; the platform, which is due for release sometime this year, will transform traditional data centers into internal cloud environments. The business case for creating a private cloud is less complexity in the data center; software like VDC-OS will virtualize and automate systems to the point that there is less ‘knob turning’ and more time spent on tasks that improve the business, said Bogomil Balkansky, VMware’s senior director of product marketing.
“Too much of IT budgets are spent on management tasks and keeping the lights on, instead of on tasks that actually improve business,” Balkansky said. “Infrastructure complexities should not get in the way of this, but they do.”
While external clouds like Amazon EC2 offer the same benefits as internal clouds, VMware is betting that large enterprises won’t send their mission-critical applications outside the four walls of their data centers to these providers. Instead, they will want to create private cloud compute infrastructures using software like VDC-OS.
“There are security challenges with public clouds; enterprises don’t trust [outsiders] with their customer and financial data,” Balkansky said. “We want to transfer the notion of cloud computing to internal data center operations.”
VMware is also hosting a webinar on January 29 about Internal Cloud Computing, if you want to hear more on this.
Balkansky said private cloud computing environments will gain traction in large data centers, but that could just be a self-serving prophecy. After all, most public cloud providers won’t pay for VMware software, using free and open source Xen instead; hence, VMware has nowhere to go but inside the enterprises that already know and love it.
While VMware is on a private cloud advocacy mission, as the largest virtualization provider on the planet it can’t ignore the need to play well with public clouds. That’s where VMware’s vCloud initiative comes into play; it will eventually allow VMware users to move their virtual machines on demand between their data centers and cloud service providers, and more than 200 partners have signed up to support vCloud so far, Balkansky said.
January 19, 2009 7:53 PM
Posted by: JoMaitland
Tags: cloud utilities, optimized for the cloud
Hifn says its new Express DS4100 NIC is “optimized for the cloud”. What’s next? Cables? Batteries? My desk?
The problem with the cloud is upload and download speed, says Hifn’s PR person.
“Try uploading a terabyte to the cloud and see how long it takes.” He has a point there, but I think it takes more than a slick NIC to fix this problem.
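To put a rough number on his point, here is a quick back-of-the-envelope calculation in Python; the link speeds are illustrative assumptions, not anything Hifn has published.

TERABYTE_BITS = 1e12 * 8  # one terabyte expressed in bits

for label, mbps in [("T1 (1.5 Mbps)", 1.5), ("10 Mbps", 10), ("100 Mbps", 100), ("1 Gbps", 1000)]:
    seconds = TERABYTE_BITS / (mbps * 1e6)
    print(f"{label:>15}: {seconds / 3600:8.1f} hours ({seconds / 86400:.1f} days)")

# At T1 speeds the transfer takes roughly two months; even at 100 Mbps it is
# still close to a full day -- a bottleneck no NIC can remove on its own.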
The exact speeds and feeds of the DS4100 will not be available until the official release next week, but ballpark pricing will be $1,000 per card. It also supports virtualization and service-oriented architectures, just in case you need the whole ball of yarn.
January 8, 2009 2:57 AM
Posted by: JoMaitland
Tags: Sun Microsystems, The 451 Group
Sun Microsystems snapped up a key piece of cloud-enabling technology via its acquisition of Belgium-based Q-Layer this week, but it’s way ahead of most enterprise IT shops, which are not ready for private clouds just yet.
Data from The 451 Group, published in October 2008, showed that 84% of its IT client base, several hundred large enterprises worldwide, have no plans to deploy internal, on-premise cloud computing.
Intergenia, a hosting company in Germany, is the only publicly named Q-Layer customer.
Q-Layer is focused on the orchestration layer above the hypervisor and supports VMware, Xen, Microsoft and Sun. Its NephOS software is designed to run on virtual and physical servers, storage and networks, abstracting the components in each layer through a uniform set of actions (e.g., create machine, reboot, back up, restore, start, stop). The software translates these actions to the underlying physical or virtual technology, so IT admins manage a virtual view that is automatically mapped to the infrastructure beneath it.
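For readers unfamiliar with the pattern, here is a rough sketch of what such an abstraction layer looks like: a uniform set of actions mapped onto whichever hypervisor or physical backend sits underneath. The class and method names are illustrative only, not Q-Layer's actual NephOS API.

from abc import ABC, abstractmethod

class MachineBackend(ABC):
    """Provider-specific driver (VMware, Xen, physical hardware, ...)."""

    @abstractmethod
    def create(self, name: str) -> None: ...

    @abstractmethod
    def reboot(self, name: str) -> None: ...

class XenBackend(MachineBackend):
    def create(self, name: str) -> None:
        print(f"xen: creating domain {name}")

    def reboot(self, name: str) -> None:
        print(f"xen: rebooting domain {name}")

class Orchestrator:
    """Admins issue uniform actions; the orchestrator maps them to the chosen backend."""

    def __init__(self, backend: MachineBackend):
        self.backend = backend

    def create_machine(self, name: str) -> None:
        self.backend.create(name)

    def reboot_machine(self, name: str) -> None:
        self.backend.reboot(name)

Orchestrator(XenBackend()).create_machine("web01")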
Other companies in this space include 3Tera, Enomaly, Eucalyptus, DynamicOps, Arjuna and Cassatt.
It sounds like great technology, which is typical of Sun, as is the timing. Sun’s track record of acquiring, and even building, great technology way ahead of market adoption is second to none, and the Q-Layer deal looks to be in keeping with the technology-focused company we know and love. Let’s hope IT shops are in a position to try this kit out sooner rather than later, and that Sun finally gets a break.
December 8, 2008 3:34 PM
Posted by: Bridget Botelho
Tags: cloud computing, Microsoft Azure
Gartner vice president and distinguished analyst Tom Bittman spoke with us about the IT industry evolution led by virtualization and cloud computing, and why big players like VMware may not end up as the virtualization software of choice for cloud providers.
Since virtualization is the foundation of cloud computing, clouds are the next logical step for virtualization vendors like VMware and Citrix Systems, but Bittman said if these vendors don’t make pricing changes, cloud platform providers like Google and Amazon won’t use them.
“Cloud computing is a wide open market, dominated by open source Xen. It is a market that is there for the taking, and for VMware that would require a significantly different pricing model,” said Bittman, who also blogs about virtualization and cloud computing. “Sun and Citrix could get a major foothold in the cloud market as well, if they get their act together.”
VMware has taken steps toward becoming cloud friendly with its vCloud initiative, but vCloud is limiting because the provider has to use VMware, Bittman said. Microsoft also has its own cloud service, Azure, supported by Hyper-V.
Microsoft will probably try to turn Azure into a platform for ISVs to build software as a service, so “in a lot of ways, they are trying to build a platform for a cloud,” Bittman said. But, “there is no reason Windows will be a prominent player in the cloud…[because providers] like Amazon EC2 don’t care what the OS is; all they care about is what is being provided.”
The future of clouds: more providers, fewer OSes
Today, cloud computing is dominated by a small number of large providers, but in the years ahead there will probably be ecosystems built around those islands: Software as a Service (SaaS) built upon the existing clouds, and the sharing of resources between cloud providers, Bittman said. He also expects fragmentation from the few general cloud platforms of today into many specialty cloud providers, with applications and infrastructures that cater to specific industries, such as healthcare, which have specific compliance requirements.
“We will see a growth to thousands of cloud providers and they won’t want to write their own software using Xen; they will want to buy software and that is where companies like Sun could make a play,” Bittman said.
Cloud computing is also changing the game when it comes to operating systems; the concept of the meta-OS (like VMware’s Virtual Datacenter OS) is changing the paradigm of one OS per physical server, Bittman said. “The old idea is you build one platform to manage one box, but if I have 10,000 boxes, I don’t want 10,000 OSes managing everything independently,” Bittman said. “If I turn an OS into a dumb container, it can work in a much more distributed way, like Microsoft’s Azure, which is essentially Windows 2008 sprinkled all throughout the data center. This is changing the way we look at OSes going forward.”
Cloud computing has the power to change things in the IT industry because of what it offers companies: flexibility and agility, Bittman said.
“Most infrastructures today focus on cost, but we are beginning to see a focus shift toward agility. People are using [cloud environments] not because of the cost savings, but because it is flexible. The ability to make changes according to demand quickly is becoming a more important factor for data centers,” Bittman said.
December 4, 2008 3:17 PM
Posted by: Mark Fontecchio
Tags: cloud computing
According to the economic side of Moore’s Law, processing power gets cheaper every year because vendors can pack more of it into the same amount of space. Should cloud computing follow Moore’s Law?
Let’s take a look at Amazon’s computing offering in the cloud — the Elastic Compute Cloud (EC2). EC2 is a web service that lets customers rent Amazon servers on which they can host their own applications. There are different price levels called “instances.” The basic one is 10 cents an hour and, according to Amazon, is equivalent to a 32-bit system with 1.7 GB of memory, 1 EC2 Compute Unit, and 160 GB of instance storage.
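The per-hour rate sounds tiny, so it is worth doing the quick arithmetic on what a continuously running basic instance actually costs:

HOURLY_RATE = 0.10  # USD per hour for the basic (small) instance, per Amazon's published pricing

print(f"One small instance, 30-day month: ${HOURLY_RATE * 24 * 30:.2f}")   # $72.00
print(f"One small instance, full year:    ${HOURLY_RATE * 24 * 365:.2f}")  # $876.00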
So what is an EC2 Compute Unit? According to Amazon, it is equal to a 1.0-1.2 GHz 2007 Opteron or Xeon processor. When EC2 first came out in 2006, one EC2 Compute Unit was equivalent to an “early-2006 1.7 GHz Xeon processor” according to Amazon documentation on EC2 Instance Types.
(Two odd things about this: There’s a 20% difference between a 1.0 GHz processor and a 1.2 GHz processor, so what gives? And if a 1.0-1.2 GHz processor in 2007 is equivalent to a 1.7 GHz processor in 2006, why change the definition at all?)
The price has stayed the same since 2006 at 10 cents an hour for that basic instance, but you are getting the same amount of processing power now as you were in 2006. So it is not following the economic portion of Moore’s Law.
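As a rough illustration of the gap, here is what the rate would look like if it had tracked a Moore’s-Law-style halving of cost every two years (the halving period is an assumption used only for illustration):

price_2006 = 0.10     # USD per hour for one small instance in 2006
years_elapsed = 2     # 2006 to 2008
halving_period = 2.0  # assumed cost-halving period, in years

expected_2008 = price_2006 * 0.5 ** (years_elapsed / halving_period)
print(f"Moore's-Law-adjusted price: ${expected_2008:.3f}/hour")  # about $0.05/hour
print("Actual EC2 price:           $0.100/hour")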
“You can now get a quad-core server for the same price you could get a single-core server in 2006. But cloud computing is not taking advantage of Moore’s Law,” said Raj Dutt, the CEO of Voxel Dot Net, a New York-based hosting company. “It’s the same price for the same amount of processing power.”
The question is whether it should. Clay Ryder, president of analyst firm Sageza Group, doesn’t necessarily think so. He sees EC2 and other cloud computing products as being based on a different pricing scheme. Whereas servers are sold on a model that includes the cost of time, materials and markup, cloud computing uses more of a value-based pricing model, and the two are not the same.
Ryder likened it to owning a car compared to renting or leasing. When you buy a car, you’re paying for the cost of materials to build it, the cost of labor it took to build (time), and any markup to make profit. When you rent one, you pay for a service.
“There is a lot of value in the Amazon approach,” he said. “You can turn it off and turn it on, and there’s no long-term cost to you, and that is an intangible value.”
Ryder hits on something here. When you pay for EC2, you’re not just paying for the server hardware. You’re also paying for the data center infrastructure around it (land acquisition, building costs, power generation, chillers, racks, etc.) and the cost of labor it takes to maintain that infrastructure and the servers. If you have your own data center and your own people, you don’t pay for that when you go buy a dozen servers from Dell.
Some might still argue that at least a portion of the cost should be subject to Moore’s Law. After all, Amazon is charging the same price for the same amount of processing power, even though that processing power is getting cheaper for them to buy.
Then again, maybe Amazon has already factored in Moore’s Law, but has also factored in the increasing cost of labor, electricity, and materials to build a data center to run those servers. So in the end, it’s all a wash, and the price stays the same.
December 4, 2008 6:13 AM
Posted by: Rick Vanover
Tags: cloud storage, Microsoft Azure
While the cloud is a new dimension that IT managers and administrators will bake into the technology landscape, we have to make one fundamental decision: will the technology be in or out of our traditional data centers? This is a loaded question in regard to policy, security, compliance and a myriad of other categories that are very line-of-business specific.
One element that can help give perspective on this type of decision comes from basic management challenges. One important thing I learned from working with various project managers is that in any engagement it is important to determine what you can manage and what you can (or cannot) control. When back-end components of the cloud reside outside traditional internal data centers, we can manage the cloud, but not entirely control it. Part of this, identified in Lauren Horowitz’s post on this site, concerns the topics of transparency, service issues and cloud standards. When it comes down to it, if the back-end components of the cloud are outside the traditional data center, they cannot be fully controlled internally.
Offerings such as Amazon EC2, Microsoft’s Azure and Rackspace sit off site, away from internal data centers. With that parameter, decisions have to be made about what lies inside and outside traditional data centers. The alternative, keeping the cloud back end internal, may not be as attractive, however, and its time to market is inferior to that of a provided solution.
Building a cloud internally may be a daunting task for some organizations, especially when some of the primary components are not already in place. One mechanism that can truly enable an internal cloud is a virtualized server environment. In quantifying the virtual environment, what matters is not necessarily how many virtual machines or hosts are in use but the percentage of systems that are virtual machines. Along with that, another building block of a cloud is a storage grid, for ultimate flexibility in data protection. Lastly, network capabilities are a pillar that defines the internal cloud; this can include the use of load-balancing and traffic-managing switches. With all of that, it becomes pretty clear that the costs and growing pains could be significant.
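As a tiny illustration of that virtualization metric, the point is the share of the estate that is virtual rather than the raw VM count; the inventory numbers below are made up for the example.

physical_only_servers = 180  # systems still running one OS on bare metal
virtual_machines = 120       # systems running as virtual machines

total_systems = physical_only_servers + virtual_machines
print(f"Virtualized share of the estate: {virtual_machines / total_systems:.0%}")  # 40%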
Make no mistake, there will be cloud computing success stories. But in the case of your own implementation, determining where the back-end cloud components reside is a critical question that will need answering sooner rather than later.
November 20, 2008 10:41 PM
Posted by: JoMaitland
Acknowledging that a vendor paid for your hotel and travel does not excuse shameless reporting. It just makes it pointless. I’d rather read a creative Salesforce.com ad than a rewritten press release. Here’s what Bernard Lunn, writing for the New York Times, had to say about Salesforce.com after his free trip to Dreamforce, the company’s user show. His points are in bold; mine are underneath.
1. They are ambitious.
Really? Find me a tech company that isn’t.
2. They have a good shot at meeting this ambition.
If the crumbling U.S. economy and worldwide recession don’t hold Salesforce.com back, nothing will.
3. They are a marketing machine with flair.
How sweet. Has Mr. Lunn met EMC or Cisco or Oracle or any other tech company with marketing machines the size of small planets? There’s plenty of flair, trust me. That’s not the problem.
4. Their biggest issue is maybe price.
5. They see today’s troubled economy as their moment to win big.
As does every underdog.
6. Their vendor ecosystem is making money and acting bullish.
Interesting. Which companies, exactly, are making money and acting bullish today?
7. They believe good software design matters to the core economics of cloud computing.
Good software design matters to everyone. No business or consumer service survives long term without it.
8. They know how to partner with big companies to make themselves look bigger.
Wheeling out Amazon, Google and Facebook to conservative enterprise IT buyers probably makes them nervous, not comfortable. How many outages has Gmail had in the past month? How many outages has Amazon S3 had since it launched? Bigger is not necessarily smarter.
9. They have focused research and development.
Care to share the roadmap?
10. They will need to be careful about usability issues.
He mentions that the Software as a Service (SaaS) world is “lock-in resistant with low switching costs,” meaning users can move to other providers easily. I am pretty sure he has this entirely the wrong way around. From my understanding, it’s actually extremely difficult to move customer relationship management data out of Salesforce.com and into another SaaS provider should you be unhappy with the company’s service.
November 20, 2008 4:13 PM
Posted by: JoMaitland
This week, Symantec Corp. launched a service that pushes configuration updates to Veritas Cluster Server and Veritas Storage Foundation users specific to that user’s environment. It’s called Veritas Operations Services and is supposedly “cloud based,” according to Sean Derrington, the director of storage management and availability at Symantec. But it’s more like Software as a Service than cloud computing. Here’s a snapshot of our Q&A with Derrington:
How does Symantec push out alerts to users?
Sean Derrington: When a customer creates an account through [the Veritas Installation Assessment Service at] VIAS.Symantec.com, they have the option to “opt in” for notifications. The alerts are sent via email, and the specifics of the alert (e.g., maintenance pack, hot fix, new technical documentation) are included and unique to the customer’s server/storage environment. (For example, a Sun Solaris customer won’t receive IBM AIX updates.) Once the customer receives the notification, they can take the appropriate action at their own discretion and timing, and incorporate it into their change-control processes.
How is this a “cloud” service other than in name?
S.D.: This is a cloud-based service, as it fundamentally alters how organizations can understand best practices and known supported configurations, and identify hidden risks in their environment. The process would be as follows:
1. An IT organization visits VIAS.Symantec.com.
2. A user downloads an agentless data collector that gathers detailed server and storage configuration.
3. The user securely uploads the information to VIAS.Symantec.com (note: no application or customer information is transmitted, simply configuration details).
4. A customized XML report on the server(s) that were analyzed is sent back to the customer, providing dynamic links to pertinent information regarding server and storage configurations.
5. This process can be repeated each time an organization is planning to go through a Veritas Storage Foundation and/or Veritas Cluster Server installation or upgrade, and real-time, valid configuration information will be used for comparison.
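For readers trying to picture steps 2 through 4, here is a hypothetical sketch of the collect-upload-report cycle in Python; the endpoint URL, payload fields and report handling are invented for illustration and are not Symantec's actual VIAS interfaces.

import json
import platform
import urllib.request

# Step 2 (stand-in): gather basic server configuration, agentlessly
config = {
    "hostname": platform.node(),
    "os": platform.system(),
    "os_release": platform.release(),
}

# Step 3 (stand-in): upload configuration details only -- no application or customer data
request = urllib.request.Request(
    "https://vias.example.com/upload",  # placeholder, not the real service URL
    data=json.dumps(config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Step 4 (stand-in): the response would be a customized XML report to review
with urllib.request.urlopen(request) as response:
    xml_report = response.read().decode("utf-8")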
Without this service, how did users go about patch management and so forth?
S.D.: There are two ways that customers have historically been able to understand Veritas Storage Foundation and Veritas Cluster Server patch management.
1. Symantec sends an email notification based on customer’s license subscription (e.g., Veritas Storage Foundation for Oracle Real Application Clusters). This isn’t platform-specific, because the detailed configuration information isn’t known; only the type of license is.
2. Alternatively, customers can visit the Symantec support site, search for the software version and platform version in their environment and determine if there are valid patches that should be applied.
Can you offer real-world examples of how the service has improved things for users?
S.D.: Yes. Currently more than 500 customers are using the Veritas Installation Assessment Service, and they have analyzed thousands of server and storage configurations. Of the servers that have been assessed, about 40% were found to have configuration errors. The top two invalid configurations (constituting about 70% of the total errors) were (1) storage subsystems that weren’t configured properly and (2) insufficient disk space. The insufficient disk space error refers to the Veritas Storage Foundation and/or Veritas Cluster Server software, not application/database data capacity.
How much does the service cost?
S.D.: It’s $500 per physical server and is available now.