The Troposphere


December 4, 2008  6:13 AM

Cloud computing, are you in or out?

Profile: Rick Vanover

The cloud is a new dimension of technology that IT managers and administrators will bake into the technology landscape, but we have to make one fundamental decision: will the technology be in or out of our traditional data centers? This is a loaded question with regard to policy, security, compliance and a myriad of other categories that are very line-of-business specific.

One element that can help give perspective for making this type of decision comes from basic management challenges. One thing that I learned from working with various project managers is that in any engagement it is important to determine what you can manage and what you can (or cannot) control. When back-end components of the cloud reside outside of traditional internal data centers, we can manage the cloud — but not entirely control it. Part of this — identified in Lauren Horowitz's post on this site — concerns transparency, service issues and cloud standards. When it comes down to it, if the back-end components of the cloud are outside the traditional data center, they cannot be fully controlled internally.

Offerings such as Amazon EC2, Microsoft's Azure and Rackspace sit off site from internal data centers. With that parameter, decisions have to be made about what lies inside and outside traditional data centers. The alternative of keeping the cloud back end internal, however, may not be as attractive, and its time to market is inferior to that of a provided solution.

Building a cloud internally may be a daunting task for some organizations, especially when some of the more fundamental components are not already in place. One mechanism that can truly enable an internal cloud is a virtualized server environment. In quantifying the virtual environment, what matters is not necessarily how many virtual machines or hosts are in use but the percentage of systems that are virtual machines. Another building block of a cloud is a storage grid, for maximum flexibility in data protection. Lastly, network capabilities are a pillar that defines the internal cloud; this can include the use of load-balancing and traffic-managing switches. With all of that, it becomes pretty clear that the costs and growing pains could be significant.

Make no mistake, there will be cloud computing success stories. But for your own implementation, determining where the back-end cloud components reside will be a critical question that needs answering sooner rather than later.

November 20, 2008  10:41 PM

10 plugs for Salesforce.com by the New York Times

Profile: Jo Maitland

Acknowledging that a vendor paid for your hotel and travel does not excuse shameless reporting. It just makes it pointless. I'd rather read a creative Salesforce.com ad than a rewritten press release. Here's what Bernard Lunn, writing for the New York Times, had to say about Salesforce.com after his free trip to Dreamforce, the company's user conference. His points are in bold; mine are underneath.

1. They are ambitious.

Really? Find me a tech company that isn’t.

2. They have a good shot at meeting this ambition.

If the crumbling U.S. economy and worldwide recession don’t hold Salesforce.com back, nothing will.

3. They are a marketing machine with flair.

How sweet. Has Mr. Lunn met EMC or Cisco or Oracle or any other tech company with marketing machines the size of small planets? There’s plenty of flair, trust me. That’s not the problem.

4. Their biggest issue is maybe price.

Only maybe?

5. They see today’s troubled economy as their moment to win big.

As does every underdog.

6. Their vendor ecosystem is making money and acting bullish.

Interesting. Which companies, exactly, are making money and acting bullish today?

7. They believe good software design matters to the core economics of cloud computing.

Good software design matters to everyone. No business or consumer service survives long term without it.

8. They know how to partner with big companies to make themselves look bigger.

Wheeling out Amazon, Google and Facebook to conservative enterprise IT buyers probably makes them nervous, not comfortable. How many outages has Gmail had in the past month? How many outages has Amazon S3 had since it launched? Bigger is not necessarily smarter.

9. They have focused research and development.

Care to share the roadmap?

10. They will need to be careful about usability issues.

He mentions that the Software as a Service (SaaS) world is “lock-in resistant with low switching costs,” meaning users can move to other providers easily. I am pretty sure he has this entirely the wrong way around. From my understanding, it's actually extremely difficult to move customer relationship management data out of Salesforce.com and into another SaaS provider should you be unhappy with the company's service.


November 20, 2008  4:13 PM

Symantec ‘cloud’ service patchy

Profile: Jo Maitland

This week, Symantec Corp. launched a service that pushes configuration updates, specific to each user's environment, to Veritas Cluster Server and Veritas Storage Foundation users. It's called Veritas Operations Services and is supposedly “cloud based,” according to Sean Derrington, the director of storage management and availability at Symantec. But it's more like Software as a Service than cloud computing. Here's a snapshot of our Q&A with Derrington:

How does Symantec push out alerts to users?

Sean Derrington:  When a customer creates an account through [the Veritas Installation Assessment Service at] VIAS.Symantec.com, they have the option to “opt in” for notifications. The alerts are sent via email and the specifics of the alert (e.g., maintenance pack, hot fix, new technical documentation) are included and unique to the customer’s server/storage environment. (For example, a Sun Solaris customer won’t receive IBM AIX updates.) Once the customer receives the notification, they can take the appropriate action at their discretion, timing, and incorporate it into their change-control processes.

How is this a “cloud” service other than in name?

S.D.: This is a cloud-based service, as it fundamentally alters how organizations can understand best practices, known supported configurations and identify hidden risks in their environment. The process would be as follows:

1. An IT organization visits VIAS.Symantec.com.

2. A user downloads an agentless data collector that gathers detailed server and storage configuration.

3. The user securely uploads the information to VIAS.Symantec.com (note: no application or customer information is transmitted, only configuration details).

4. A customized XML report on the server(s) that were analyzed is sent back to the customer, providing dynamic links to pertinent information regarding server and storage configurations.

5. This process can be repeated each time an organization is planning to go through a Veritas Storage Foundation and/or Veritas Cluster Server installation or upgrade, and real-time, valid configuration information will be used for comparison.
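As a purely illustrative aside, here is roughly what a customer-side script for step 4 might look like: pulling the flagged items out of the returned report. The element names, attributes and file name below are my own assumptions for the sketch, not Symantec's actual XML schema.

```python
# Hypothetical sketch: parse a VIAS-style configuration report and list flagged items.
# Element and attribute names here are assumptions for illustration only.
import xml.etree.ElementTree as ET

def list_flagged_items(report_path):
    """Return (server, check, severity, link) tuples for every failed check."""
    tree = ET.parse(report_path)
    findings = []
    for server in tree.getroot().iter("server"):          # assumed element name
        for check in server.iter("check"):                # assumed element name
            if check.get("status") == "fail":             # assumed attribute
                findings.append((
                    server.get("hostname"),
                    check.get("name"),
                    check.get("severity", "unknown"),
                    check.findtext("link", default=""),   # pointer to remediation doc
                ))
    return findings

if __name__ == "__main__":
    for host, name, severity, link in list_flagged_items("vias_report.xml"):
        print(f"{host}: {name} [{severity}] {link}")
```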

Without this service, how did users go about patch management and so forth?

S.D.: There are two ways that customers have historically been able to understand Veritas Storage Foundation and Veritas Cluster Server patch management.

1. Symantec sends an email notification based on the customer's license subscription (e.g., Veritas Storage Foundation for Oracle Real Application Clusters). This isn't platform-specific, because the detailed configuration information isn't known; only the type of license is.

2. Alternatively, customers can visit the Symantec support site, search for the software version and platform version in their environment and determine if there are valid patches that should be applied.

Can you offer real-world examples of how the service has improved things for users?

S.D.: Yes. Currently more than 500 customers are using the Veritas Installation Assessment Service, and they have analyzed thousands of server and storage configurations. About 40% of the servers assessed were found to have configuration errors. The top two invalid configurations (constituting about 70% of the total errors) were (1) storage subsystems that weren't configured properly and (2) insufficient disk space. The insufficient disk space error refers to the Veritas Storage Foundation and/or Veritas Cluster Server software, not application/database data capacity.

How much does the service cost?

S.D.:  It’s $500 per physical server and is available now. 


November 12, 2008  10:13 PM

Cloud tidbits from the 451 Group Client Conference

Profile: Jo Maitland

Duncan Johnston-Watt, the CEO of stealthy upstart Cloudsoft Corp., gave the first presentation on his new company at the 451 Group's Client Conference in Boston today.

“I like to say you’re not doing more with less; you’re doing nothing,” he said, speaking of the shift to cloud computing. “You will own less and less of the infrastructure and care only about what business services you need.”

Cloudsoft is building a product that uses patented mediated-routing IP from Enigmatec Corp., Johnston-Watt's previous company. Called MarketMaker, the product will provide mediation as a service for service providers looking to get into the online auction, betting, brokerage and bookings businesses. “You want to be able to move mediation from one cloud to another to fulfill orders … to move order books around to provide a guaranteed performance level and predictable behavior,” he said.

The name of the company, which implies it intends to be the Microsoft of the cloud, is no accident. “What is the Microsoft Office of the cloud … what are the essential business services you need from the cloud?” Johnston-Watt said. That’s where he believes cloud computing will get interesting.

Meanwhile, keeping it real, William Fellows, principal analyst and co-founder of the 451 Group, said cultural and organizational issues concerning power, trust, control and ownership are the biggest barriers to adoption of cloud services by enterprise IT. He also believes that the contractual language is not there yet for service-level agreements that meet compliance regulations. But Fellows insisted that IT should not dismiss the trend. “Just understand what it's good for.”


November 10, 2008  5:12 PM

EMC takes wraps off Atmos cloud plans

Profile: Jo Maitland

EMC Corp. says that it has a handful of Web 2.0 service providers using its new Atmos cloud-optimized storage (COS) product, but none that were ready to discuss it today. So for now, Atmos is an interesting technology announcement waiting for a reality check from customers.

And while EMC is focused on selling this to service providers initially, it does believe there’s an enterprise play down the line for media and entertainment, life sciences, and oil and gas companies interested in building private clouds. Somewhat confusingly, EMC also hinted at its plans eventually to become a service provider itself, which may cause some channel tension, but for now Atmos is a product only.

Here’s a taste of what EMC claims it will do. Atmos is a globally distributed file system (code-named Maui) that runs on purpose-built EMC hardware (code-named Hulk).

The software automatically distributes data, placing it on nodes across a network according to user-defined policies. These policies dictate what level of replication, versioning, compression, deduplication and disk drive spin-down a particular piece of data should have as it resides in the cloud. Depending on how important the information is, there might be one, five or 10 copies of it around the world, for example.
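To make the policy idea concrete, here is a minimal sketch of how importance-driven placement might be expressed. The tier names, policy fields and site list are illustrative assumptions on my part, not EMC's actual policy language.

```python
# Illustrative sketch of policy-driven placement, loosely modeled on the Atmos
# description above. Field names, tiers and values are assumptions, not EMC's syntax.

POLICIES = {
    "critical": {"replicas": 10, "versioning": True,  "compress": False, "dedupe": False, "spin_down": False},
    "standard": {"replicas": 5,  "versioning": True,  "compress": True,  "dedupe": True,  "spin_down": False},
    "archive":  {"replicas": 1,  "versioning": False, "compress": True,  "dedupe": True,  "spin_down": True},
}

def place_object(importance, available_sites):
    """Pick target sites for an object based on its importance tier."""
    policy = POLICIES[importance]
    # Spread replicas across as many distinct sites as the policy asks for.
    targets = available_sites[:policy["replicas"]]
    return {"sites": targets, **policy}

print(place_object("standard", ["boston", "london", "tokyo", "sydney", "saopaulo", "mumbai"]))
```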

The closest thing out there today that resembles Atmos is Cleversafe.org.

Atmos also provides Web service application programming interfaces, including Representational State Transfer and Simple Object Access Protocol, as well as Common Internet File System and Network File System support for integration with file services. It offers a unified namespace, browser-based admin tools and multitenant support, so multiple applications can be served from the same infrastructure without commingling data. And Simple Network Management Protocol support provides a plugin to existing reporting tools on top of the reports and alerts Atmos already offers, according to EMC.

The software ships on purpose-built hardware available in 120TB, 240TB or 360TB configurations. [Editor’s note: The National Center for Atmospheric Research has an archive already several petabytes in size. It would need at least three of these boxes to contain just its existing data. In other words 360 TB is large, but not that large by today’s standards].

There’s also a fit with VMware as Atmos can run on a VMware image, although Mike Feinberg, the senior VP of the cloud infrastructure group at EMC, says users don’t need VMware to use Atmos.

EMC did not announce pricing details today either, except to say that it'll be competitive with existing petabyte-scale JBOD-type offerings.


November 5, 2008  10:01 PM

Cloud computing allowed you to read an 1851 New York Times article online

Profile: Mark Fontecchio

Nicholas Carr recounts the story of the New York Times trying to get its archives all online, and using cloud computing to do it.

The short version: NYT scanned all their articles in, resulting in four terabytes worth of TIFF files. They wanted to convert them all to PDFs but weren't capable of doing it in-house. So a software programmer at NYT sent them all to Amazon's Simple Storage Service, created some code with Amazon's Elastic Compute Cloud to convert them to PDFs, and voila, a day later it was done.
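For a sense of the shape of that job, here is a minimal sketch of the per-file conversion step using today's tooling (boto3 and Pillow). The bucket and key names are placeholders, and this is only an illustration of the workflow, not the Times' actual code.

```python
# Rough sketch of the S3 + EC2 conversion workflow described above.
# Bucket/key names are placeholders; this is not the New York Times' actual code.
import boto3
from PIL import Image

s3 = boto3.client("s3")

def convert_tiff_to_pdf(bucket, tiff_key, pdf_key):
    """Download a scanned TIFF, render it to PDF, upload the PDF back to S3."""
    s3.download_file(bucket, tiff_key, "/tmp/page.tif")
    with Image.open("/tmp/page.tif") as img:
        img.convert("RGB").save("/tmp/page.pdf", "PDF")
    s3.upload_file("/tmp/page.pdf", bucket, pdf_key)

convert_tiff_to_pdf("nyt-archive-demo", "1851/issue-001.tif", "1851/issue-001.pdf")
```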

The total cost for the computing job? Gottfrid told me that the entire EC2 bill came to $240. (That’s 10 cents per computer-hour times 100 computers times 24 hours; there were no bandwidth charges since all the data transfers took place within Amazon’s system – from S3 to EC2 and back.)
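The arithmetic behind that $240 figure is easy to check:

```python
# Reproducing the EC2 bill quoted above: 100 instances for 24 hours at $0.10/hour.
instances, hours, rate_per_hour = 100, 24, 0.10
print(f"${instances * hours * rate_per_hour:.2f}")  # -> $240.00
```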

If it wasn’t for the cloud, Gottfrid told me, the Times may well have abandoned the effort. Doing the conversion would have either taken a whole lot of time or a whole lot of money, and it would have been a big pain in the ass. With the cloud, though, it was fast, easy, and cheap, and it only required a single employee to pull it off. “The self-service nature of EC2 is incredibly powerful,” says Gottfrid. “It is often taken for granted but it is a real democratizing force in lowering the barriers.”

Which brings Carr to his main point: Cloud computing will be important for what it will enable that can't already be done. Up to now, most people have focused on how to transfer their current IT infrastructure into the cloud. But cloud computing will make its mark by opening up avenues that were previously closed, or not even built yet.

But as one commenter stated, moving existing infrastructure is going to naturally be the first focus, as enterprises are worried about their current infrastructure, and not necessarily new tasks that the cloud could tackle. It will take a more long-term visionary within the company (such as the chief technology officer) to figure out which new trenches to build.


November 3, 2008  2:54 PM

Rackable Systems CloudRack designed for cloud computing

Profile: Bridget Botelho

Fremont, Calif.-based Rackable Systems, Inc. is catering to cloud computing environments with a new server rack designed specifically for them, called CloudRack, the company announced October 30.

This new product from Rackable is one of many we are seeing from vendors trying to design new equipment or repurpose existing equipment for cloud computing environments, which are characterized by a large number of server nodes in scalable data centers providing Software as a Service (SaaS) to users.

According to Saeed Atashie, director of server products at Rackable, CloudRack was created with the density and power efficiency that cloud environments demand.

CloudRack is a 44U cabinet that supports up to 88 servers, 176 processors from either AMD or Intel, 704 cores, 352 TB of storage and up to 8x 3.5” drives per board (4 drives per CPU). It is designed to be power efficient and easy to service, according to Rackable.

“CloudRack is designed from the ground up with cloud customers' needs and buying behavior in mind,” Atashie said. “In comparison, a number of our competitors design for a general-purpose (one size fits all) server market and then try to position these products in the cloud computing market.”
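For a sense of what those cabinet-level figures mean per server, here is my own back-of-the-envelope breakdown of the numbers Rackable quotes above:

```python
# Back-of-the-envelope breakdown of the CloudRack figures quoted above.
servers, cpus, cores, storage_tb, rack_units = 88, 176, 704, 352, 44

print(f"servers per U:      {servers / rack_units:.1f}")    # 2.0
print(f"CPUs per server:    {cpus / servers:.0f}")           # 2
print(f"cores per CPU:      {cores / cpus:.0f}")             # 4
print(f"storage per server: {storage_tb / servers:.1f} TB")  # 4.0 TB across 8 drives, i.e. ~500 GB drives
```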

Rackable also announced servers for HPC and cloud environments back in June, the XE2208, with twice the density of existing Rackable Systems servers. Rackable is focusing products on the cloud computing market because it is “the latest industry mega-trend,” Atashie said. Other companies focusing products on the cloud include IBM, VMware, HP and Intel.

Atashie said Rackable already has customers lined up for the CloudRack, but would not disclose any names. In general, CloudRack will appeal to companies using cloud computing or those using high performance computing, the company reported.

The Rackable Systems CloudRack CR1000 model can be built to order. More information about specific configurations, pricing or Rackable Systems’ build-to-order model is available on Rackable Systems’ website.


October 29, 2008  6:12 PM

Cloud storage: What a difference a decade makes

Profile: Alex Barrett

When it comes to all the varieties of cloud services out there, cloud storage gets a lot of love from hosting providers such as Rackspace and The Planet, which have both made cloud-related storage moves of late.

But the skeptic in me wonders why hosting providers think that cloud storage will succeed when the storage service providers (SSPs) of the late 1990s were such a blatant failure. I'm talking about companies like the dearly departed StorageNetworks, which rose to IPO stardom in 2000, only to shutter its doors two years later.

For one thing, said Rob Walters, The Planet's general manager for data protection and storage, there's a big difference between the storage used by SSPs of yore and today's cloud providers. “The old SSPs used hardware like the EMC Symmetrix, the economics of which just didn't work out,” he said. Cloud storage providers, on the other hand, rely heavily on taking commercial off-the-shelf (COTS) hardware and replicating it ad nauseam to get decent reliability and performance.

To that end, The Planet struck a deal last month with Nirvanix, a cloud storage provider that has written its own distributed “Storage Delivery Network” (SDN) and a cloud-based virtual storage gateway, Nirvanix CloudNAS, which runs on commodity Dell hardware. As part of the deal, The Planet's customers can tap into Nirvanix storage resources, and The Planet will act as one of the replicated nodes in Nirvanix's geographically distributed SDN.

People are also looking to store data today that has different performance needs than what the SSPs proposed to house, said Urvish Vashi, general manager for The Planet's dedicated hosting business: namely backup and archive data, plus Web 2.0 data like photographs and streaming video files. With these data types, “I/O to the disk isn't the limiting factor, it's I/O to the network.” In other words, for these files it doesn't matter if you store the data on a dog of a slow drive, because access to it is limited by an even slower network.
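A quick back-of-the-envelope comparison shows why. The throughput figures below are generic assumptions of mine (a commodity SATA drive versus a 100 Mbps uplink), not numbers from The Planet:

```python
# Illustrative comparison: even a slow disk outruns a typical WAN link.
file_gb = 1
disk_mb_per_s = 75        # assumed commodity SATA sequential read
network_mbit_per_s = 100  # assumed 100 Mbps uplink, roughly 12.5 MB/s

disk_seconds = file_gb * 1024 / disk_mb_per_s
net_seconds = file_gb * 1024 / (network_mbit_per_s / 8)

print(f"read 1 GB from disk:    {disk_seconds:.0f} s")  # ~14 s
print(f"send 1 GB over network: {net_seconds:.0f} s")   # ~82 s
```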

And then there's the fact that things are just different now. Whereas 10 years ago public dialogue centered on security and privacy, people nowadays publish and expose every detail of their lives on blogs or sites like MySpace and Facebook. Taking that idea one step further, the idea of hosting data on shared infrastructure just doesn't faze companies the way it used to, Vashi said. “It's less of an unusual choice than it used to be.”

I’m still skeptical, but willing to suspend disbelief.


October 27, 2008  10:28 PM

Microsoft launches Azure for cloud computing

Profile: Leah Rosin

Following on the heels of an IDC report predicting that cloud computing will capture IT spending growth over the next five years, another major player came to the cloud game on Monday. During a keynote speech at the Microsoft Professional Developers Conference 2008 (PDC2008), Ray Ozzie, Microsoft Corp.’s chief software architect, announced Windows Azure, the cloud-based service foundation underlying its Azure Services Platform.

The Azure platform combines cloud-based developer capabilities with storage, computational and networking infrastructure services hosted by Microsoft’s global datacenter network, providing developers the ability to deploy applications in the cloud or on-premises and enabling experiences across a range of business and consumer scenarios. A limited community technology preview (CTP) of the Azure Services Platform was initially made available to PDC2008 attendees.

“Today marks a turning point for Microsoft and the development community,” Ozzie said. “We have introduced a game-changing set of technologies that will bring new opportunities to Web developers and business developers alike. The Azure Services Platform, built from the ground up to be consistent with Microsoft’s commitment to openness and interoperability, promises to transform the way businesses operate and how consumers access their information and experience the Web. Most important, it gives our customers the power of choice to deploy applications in cloud-based Internet services or through on-premises servers, or to combine them in any way that makes the most sense for the needs of their business.”

The key components of Azure are summarized below:

• Windows Azure for service hosting and management, low-level scalable storage, computation and networking

• Microsoft SQL Services for database services and reporting

• Microsoft .NET Services that are service-based implementations of familiar .NET Framework concepts such as workflow and access control

• Live Services for a consistent way for users to store, share and synchronize documents, photos, files and information across their PCs, phones, PC applications and websites

• Microsoft SharePoint Services and Microsoft Dynamics CRM Services for business content, collaboration and rapid solution development in the cloud

Nicholas Carr shared some of the nitty-gritty details:

During its preview stage, Windows Azure will be available for free to developers. Once the platform launches commercially – and, according to Ozzie, Microsoft will be “intentionally conservative” in rolling out the full platform – pricing will be based on a user’s actual consumption of CPU time (per hour), bandwidth (per gigabyte), storage (per gigabyte) and transactions. The actual fee structure has not been released, though Ozzie says it will be “competitive with the marketplace” and will vary based on different available service levels.

Now, it’s not horribly shocking that Microsoft has joined the movement to the cloud. But it’s a bit amusing because a lot of the cloud effort has been generated by those anti-Windows programmers, looking to share applications that directly compete with the Microsoft product suite. As I read through David Chappell’s Azure white paper, I couldn’t help but chuckle when I read this: “The Windows Azure compute service is based, of course, on Windows.”


October 22, 2008  9:18 PM

Rackspace: From managed hosting to cloud hosting

Profile: Alex Barrett

In an effort to wrap my mind around this cloud computing stuff, I watched the webcast of Rackspace’s cloud computing launch today, where the company laid out its plans to move from simple managed hosting provider to cloud provider extraordinaire, taking on Amazon Elastic Compute Cloud, or EC2, and Simple Storage Service, or S3, in the process.

Rackspace’s plan centers on acquisition, partnership and expanding its existing Mosso Web hosting product into three broad offerings: Cloud Sites website hosting, Cloud Files storage service, and Cloud Servers virtual private servers.

On the acquisition side, Rackspace has acquired Jungle Disk, a cloud-based desktop storage and backup provider that has thus far relied on Amazon's S3. It also acquired Slicehost, a provider of Xen-based virtual private servers (VPSs) that claims 11,000 customers and 15,000 virtual servers.

As far as new Mosso offerings, the new Cloud Files will come in at $0.15 per GB of replicated data, or if the data is distributed across a content delivery network (CDN), at $0.22 per GB. CDN capabilities come by way of a partnership with Limelight Inc.

Also as part of Cloud Files, Rackspace will partner with Sonian Networks to provide cloud-based email archiving starting at $3 per mailbox.

Coming soon, Cloud Servers is Mosso's new name for Slicehost's VPS offering. Under Slicehost, the service starts at $20/month for a virtual Xen server with 256MB of RAM, 10GB of storage and 100GB of bandwidth. “Slices” scale up to 15.5GB of RAM, 620GB of storage and 2,000GB of bandwidth for $800/month.
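To put those price points in perspective, here is a quick sketch of what a sample workload would cost at the quoted Cloud Files rates, assuming a flat per-GB charge with no tiers or minimums (my assumption, not Rackspace's published terms):

```python
# Illustrative monthly cost at the quoted Cloud Files rates, assuming a flat per-GB charge.
stored_gb = 500
replicated_rate, cdn_rate = 0.15, 0.22  # $/GB from the announcement above

print(f"replicated only: ${stored_gb * replicated_rate:.2f}/month")  # $75.00
print(f"with CDN:        ${stored_gb * cdn_rate:.2f}/month")         # $110.00
```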

When it comes to the Xen-based Slicehost — aka Cloud Servers — I should note that Mosso is a longtime VMware customer that has publicly pondered the viability of the relationship as it expands its services. It will be interesting to see whether this acquisition signals a break from VMware or whether it will continue to use VMware as the underpinning of its Cloud Sites offering. Rackspace, care to comment?

On another note, Slicehost is one of many hosting providers that use open source Xen as the basis of their cloud offerings. Presumably, it’s also the kind of company to which Simon Crosby, CTO of Citrix Systems Inc., referred when Citrix announced XenServer Cloud Edition and Citrix Cloud Center (C3) at VMworld 2008.

At the time, Crosby said that luring these hosting providers into Citrix support contracts was a huge priority. “Trivially, we looked around and found a couple hundred hosted IT infrastructure providers using open source Xen,” he said. “XenServer Cloud Edition is intended to win greenfield accounts but also to bring the open source Xen guys back home.” XenServer Cloud Edition boasts features like the ability to run Windows guests and commercial support.

One final thought: If any of you find this whole cloud computing thing a bit, ahem, nebulous, Lew Moorman, Rackspace’s chief strategy officer, made an interesting distinction between different types of cloud offerings. “Cloud apps,” Moorman said, are what we used to think of as Software as a Service (SaaS); “cloud hosting,” meanwhile, refers to pooled external compute resources. And of course, there’s cloud storage. Rackspace, it seems, will offer all three.

