The Troposphere


January 19, 2009  7:53 PM

Cloud hype extends to NIC cards

Jo Maitland

Hifn says its new Express DS4100 NIC card is “optimized for the cloud”. What’s next? Cables? Batteries? My desk?

The problem with the cloud is upload and download speed, says Hifn’s PR person.
“Try uploading a terabyte to the cloud and see how long it takes.” He has a point there, but I think it takes more than a slick NIC to fix this problem.

The exact speeds and feeds of the DS4100 will not be available until the official release next week, but ballpark pricing will be $1000 per card. It also supports virtualization and service-oriented architectures, just in case you need the whole ball of yarn.

January 8, 2009  2:57 AM

Sun buys Q-Layer a tad early

Jo Maitland

Sun Microsystems snapped up a key piece of cloud-enabling technology via its acquisition of Belgium-based Q-Layer this week, but it’s way ahead of most enterprise IT shops, which are not ready for private clouds just yet.

Data from The 451 Group, published in October 2008, showed that 84% of its IT client base, several hundred large enterprises worldwide, have no plans to deploy internal, on-premise cloud computing.

Intergenia, a hosting company in Germany, is the only public Q-Layer customer.

Q-Layer is focused on the orchestration layer above the hypervisor and supports VMware, Xen, Microsoft and Sun. Its NephOS software is designed to run on virtual and physical servers, storage and networks, and it abstracts the components in each layer through a uniform set of actions (e.g., create machine, reboot, back up, restore, start, stop). The software translates these actions to the underlying physical or virtual technology, so IT admins manage a virtual view that is automatically mapped to whatever runs beneath it.
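NephOS internals aren’t public, but the pattern described here, a uniform set of actions translated to whichever hypervisor or physical platform sits underneath, is straightforward to sketch. The following is a minimal, hypothetical illustration in Python; every class and method name is invented for this post, not Q-Layer’s actual API.

# Hypothetical sketch of the abstraction pattern described above: uniform
# actions (create, reboot, back up, ...) mapped onto a pluggable backend.
# All names are invented for illustration; NephOS internals are not public.
from abc import ABC, abstractmethod


class MachineBackend(ABC):
    """One adapter per underlying technology (VMware, Xen, physical, ...)."""

    @abstractmethod
    def create(self, name: str) -> str: ...

    @abstractmethod
    def reboot(self, machine_id: str) -> None: ...

    @abstractmethod
    def backup(self, machine_id: str) -> str: ...


class XenBackend(MachineBackend):
    def create(self, name: str) -> str:
        # Would call the Xen toolstack here; stubbed for illustration.
        return f"xen-{name}"

    def reboot(self, machine_id: str) -> None:
        print(f"rebooting {machine_id} via Xen")

    def backup(self, machine_id: str) -> str:
        return f"snapshot-of-{machine_id}"


class Orchestrator:
    """Admins issue uniform actions; the orchestrator maps them to a backend."""

    def __init__(self, backend: MachineBackend):
        self.backend = backend

    def create_machine(self, name: str) -> str:
        return self.backend.create(name)

    def reboot_machine(self, machine_id: str) -> None:
        self.backend.reboot(machine_id)

    def backup_machine(self, machine_id: str) -> str:
        return self.backend.backup(machine_id)


if __name__ == "__main__":
    orch = Orchestrator(XenBackend())
    vm = orch.create_machine("web01")
    orch.reboot_machine(vm)

Swapping in a VMware or physical-server adapter would not change anything the admin sees, which is the point of the uniform-action layer.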

Other companies in this space include 3tera, Enomaly, Eucalyptus, DynamicOps, Arjuna and Cassatt.

It sounds like great technology, which is typical of Sun, as is the timing. Sun’s track record of acquiring great technology, and even building great technology, way ahead of market adoption is second to none. This deal with Q-Layer looks to be in keeping with the technology-focused company we know and love. Let’s hope IT shops are in a position to try this kit out sooner rather than later and Sun finally gets a break.


December 8, 2008  3:34 PM

Gartner VP predicts thousands of clouds

Bridget Botelho

Gartner vice president and distinguished analyst Tom Bittman spoke with us about the IT industry evolution being led by virtualization and cloud computing, and why big players like VMware won’t necessarily be the virtualization software of choice for cloud providers.

Since virtualization is the foundation of cloud computing, clouds are the next logical step for virtualization vendors like VMware and Citrix Systems, but Bittman said if these vendors don’t make pricing changes, cloud platform providers like Google and Amazon won’t use them.

“Cloud computing is a wide open market, dominated by open source Xen. It is a market that is there for the taking, and for VMware that would require a significantly different pricing model,” said Bittman, who also blogs about virtualization and cloud computing. “Sun and Citrix could get a major foothold in the cloud market as well, if they get their act together.”

VMware has taken steps toward becoming cloud-friendly with its vCloud initiative, but vCloud is limiting because the provider has to use VMware, Bittman said. Microsoft also has its own cloud service, Azure, supported by Hyper-V.

Microsoft will probably try to turn Azure into a platform for ISVs to build software as a service, so “in a lot of ways, they are trying to build a platform for a cloud,” Bittman said. But, “there is no reason Windows will be a prominent player in the cloud…[because providers] like Amazon EC2 don’t care what the OS is; all they care about is what is being provided.”

The future of clouds: more providers, fewer OSes

Today, cloud computing is dominated by a small number of large providers, but in the years ahead there will probably be ecosystems built around those islands: Software as a Service (SaaS) built on the existing clouds, and the sharing of resources between cloud providers, Bittman said. He also expects fragmentation from the few general cloud platforms of today into many specialty cloud providers whose applications and infrastructures cater to specific industries, such as healthcare, that have specific compliance requirements.

“We will see a growth to thousands of cloud providers and they won’t want to write their own software using Xen; they will want to buy software and that is where companies like Sun could make a play,” Bittman said.

Cloud computing is also changing the game when it comes to operating systems: the concept of the Meta-OS (like VMware’s Virtual Data Center OS) is changing the paradigm of using one OS per physical server, Bittman said. “The old idea is you build one platform to manage one box, but if I have 10,000 boxes, I don’t want 10,000 OSEs managing everything independently,” he said. “If I turn an OS into a dumb container, it can work in a much more distributed way, like Microsoft’s Azure, which is essentially Windows 2008 sprinkled all throughout the data center. This is changing the way we look at OSes going forward.”

Cloud computing has the power to change things in the IT industry because of what it offers companies: flexibility and agility, Bittman said.

“Most infrastructures today focus on cost, but we are beginning to see a focus shift towards agility. People are using [cloud environments] not because of the cost savings, but because it is flexible. The ability to make changes according to demand quickly is becoming a more important factor for data centers,” Bittman said.


December 4, 2008  3:17 PM

Should Amazon EC2 follow Moore’s Law?

Mark Fontecchio

According to the economic side of Moore’s Law, processing power gets cheaper every year because vendors can pack more of it into the same amount of space. Should cloud computing follow Moore’s Law?

Let’s take a look at Amazon’s computing offering in the cloud — the Elastic Compute Cloud (EC2). EC2 is a web service that lets customers rent Amazon servers on which they can host their own applications. There are different price levels called “instances.” The basic one is 10 cents an hour and, according to Amazon, is equivalent to a 32-bit system with 1.7 GB of memory, 1 EC2 Compute Unit and 160 GB of instance storage.
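As a rough back-of-envelope aside (my arithmetic, not Amazon’s published totals), here is what that 10-cents-an-hour rate works out to if an instance runs around the clock:

# Back-of-envelope cost of one EC2 small instance left running 24x7,
# at the $0.10/hour rate cited above. Bandwidth and storage fees excluded.
HOURLY_RATE = 0.10          # dollars per instance-hour

hours_per_month = 24 * 30   # rough month
hours_per_year = 24 * 365

monthly_cost = HOURLY_RATE * hours_per_month
yearly_cost = HOURLY_RATE * hours_per_year

print(f"~${monthly_cost:.0f}/month, ~${yearly_cost:.0f}/year per always-on instance")
# ~$72/month, ~$876/year per always-on instance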

So what is an EC2 Compute Unit? According to Amazon, it is equal to a 1.0-1.2 GHz 2007 Opteron or Xeon processor. When EC2 first came out in 2006, one EC2 Compute Unit was equivalent to an “early-2006 1.7 GHz Xeon processor” according to Amazon documentation on EC2 Instance Types.

(Two odd things about this: There’s a 20% difference between a 1.0 GHz processor and a 1.2 GHz processor, so what gives? And if a 1.0-1.2 GHz processor in 2007 is equivalent to a 1.7 GHz processor in 2006, why change the definition at all?)

The price has stayed the same since 2006 at 10 cents an hour for that basic instance, but you are getting the same amount of processing power now as you were in 2006. So it is not following the economic portion of Moore’s Law.

“You can now get a quad-core server for the same price you could get a single-core server in 2006. But cloud computing is not taking advantage of Moore’s Law,” said Raj Dutt, the CEO of Voxel Dot Net, a New York-based hosting company. “It’s the same price for the same amount of processing power.”

The question is whether it should. Clay Ryder, president of analyst firm Sageza Group, doesn’t necessarily think so. He sees EC2 and other cloud computing products as based on a different pricing scheme. Whereas servers are sold on a model that includes the cost of time, materials and markup, cloud computing is more of a value-based pricing model, and the two are not the same.

Ryder likened it to owning a car compared to renting or leasing. When you buy a car, you’re paying for the cost of materials to build it, the cost of labor it took to build (time), and any markup to make profit. When you rent one, you pay for a service.

“There is a lot of value in the Amazon approach,” he said. “You can turn it off and turn it on, and there’s no long-term cost to you, and that is an intangible value.”

Ryder hits on something here. When you pay for EC2, you’re not just paying for the server hardware. You’re also paying for the data center infrastructure around it (land acquisition, building costs, power generation, chillers, racks, etc.) and the cost of labor it takes to maintain that infrastructure and the servers. If you have your own data center and your own people, you don’t pay for that when you go buy a dozen servers from Dell.

Some might still argue that at least a portion of the cost should be subject to Moore’s Law. After all, Amazon is charging the same price for the same amount of processing power, even though that processing power is getting cheaper for them to buy.

Then again, maybe Amazon has already factored in Moore’s Law, but has also factored in the increasing cost of labor, electricity, and materials to build a data center to run those servers. So in the end, it’s all a wash, and the price stays the same.


December 4, 2008  6:13 AM

Cloud computing, are you in or out?

Rick Vanover

While the cloud is a new dimension of technology that IT managers and administrators will bake into the technology landscape, we have to make one fundamental decision: will the technology be in or out of our traditional data centers? This is a loaded question in regard to policy, security, compliance and a myriad of other categories that are very line-of-business specific.

One element that can help give perspective on this type of decision comes from basic management practice. One important thing I learned from working with various project managers is that in any engagement you have to determine what you can manage and what you can (or cannot) control. When back-end components of the cloud reside outside of traditional internal data centers, we can manage the cloud — but not entirely control it. Part of this — identified in Lauren Horowitz’ post on this site — concerns the topics of transparency, service issues and cloud standards. When it comes down to it, if the back-end components of the cloud are outside the traditional data center, they cannot be fully controlled internally.

Offerings such as Amazon EC2, Microsoft’s Azure and Rackspace sit off site, outside internal data centers. With that parameter in mind, decisions have to be made about what lies inside and outside traditional data centers. The alternative, building the cloud back end internally, may not be as attractive, however, and its time to market is inferior to that of a provided solution.

Building a cloud internally may be a daunting task for some organizations, especially when some of the primary components are not already in place. One mechanism that can truly enable an internal cloud is a virtualized server environment. In quantifying the virtual environment, what matters is not necessarily how many virtual machines or hosts are in use but the percentage of systems that are virtual machines. Along with that, another building block of a cloud is a storage grid, for maximum flexibility in data protection. Lastly, network capabilities are a pillar that defines the internal cloud; this can include the use of load-balancing and traffic-managing switches. With all of that, it becomes pretty clear that the costs and growing pains could be significant.

Make no mistake, there will be cloud computing success stories. But in the case of your own implementation, determining where the back-end cloud components reside will be a critical question that needs answering sooner rather than later.


November 20, 2008  10:41 PM

10 plugs for Salesforce.com by the New York Times

Jo Maitland

Acknowledging that a vendor paid for your hotel and travel does not excuse shameless reporting. It just makes it pointless. I’d rather read a creative Salesforce.com ad than a rewritten press release. Here’s what Bernard Lunn, writing for the New York Times, had to say about Salesforce.com after his free trip to Dreamforce, the company’s user show. His points are in bold; mine are underneath.

1. They are ambitious.

Really? Find me a tech company that isn’t.

2. They have a good shot at meeting this ambition.

If the crumbling U.S. economy and worldwide recession don’t hold Salesforce.com back, nothing will.

3. They are a marketing machine with flair.

How sweet. Has Mr. Lunn met EMC or Cisco or Oracle or any other tech company with marketing machines the size of small planets? There’s plenty of flair, trust me. That’s not the problem.

4. Their biggest issue is maybe price.

Only maybe?

5. They see today’s troubled economy as their moment to win big.

As does every underdog.

6. Their vendor ecosystem is making money and acting bullish.

Interesting. Which companies, exactly, are making money and acting bullish today?

7. They believe good software design matters to the core economics of cloud computing.

Good software design matters to everyone. No business or consumer service survives long term without it.

8. They know how to partner with big companies to make themselves look bigger.

Wheeling out Amazon, Google and Facebook to conservative enterprise IT buyers probably makes them nervous, not comfortable. How many outages has Gmail had in the past month? How many outages has Amazon S3 had since it launched? Bigger is not necessarily smarter.

9. They have focused research and development.

Care to share the roadmap?

10. They will need to be careful about usability issues.

He mentions that the Software as a Service (SaaS) world is “lock-in resistant with low switching costs,” meaning users can move to other providers easily. I am pretty sure he has this entirely the wrong way around. From my understanding, it’s actually extremely difficult to move customer relationship management data out of Salesforce.com and into another SaaS provider should you be unhappy with the company’s service.


November 20, 2008  4:13 PM

Symantec ‘cloud’ service patchy

Jo Maitland

This week, Symantec Corp. launched a service that pushes configuration updates, specific to each user’s environment, to Veritas Cluster Server and Veritas Storage Foundation users. It’s called Veritas Operations Services and is supposedly “cloud based,” according to Sean Derrington, the director of storage management and availability at Symantec. But it’s more like Software as a Service than cloud computing. Here’s a snapshot of our Q&A with Derrington:

How does Symantec push out alerts to users?

Sean Derrington: When a customer creates an account through [the Veritas Installation Assessment Service at] VIAS.Symantec.com, they have the option to “opt in” for notifications. The alerts are sent via email, and the specifics of the alert (e.g., maintenance pack, hot fix, new technical documentation) are included and unique to the customer’s server/storage environment. (For example, a Sun Solaris customer won’t receive IBM AIX updates.) Once the customer receives the notification, they can take the appropriate action at their own discretion and timing, and incorporate it into their change-control processes.

How is this a “cloud” service other than in name?

S.D.: This is a cloud-based service, as it fundamentally alters how organizations can understand best practices, known supported configurations and identify hidden risks in their environment. The process would be as follows:

1. An IT organization visits VIAS.Symantec.com.

2. A user downloads an agentless data collector that gathers detailed server and storage configuration.

3. The user securely uploads the information to VIAS.Symantec.com (note: no application or customer information is transmitted, simply configuration details).

4. A customized XML report on the server(s) that were analyzed is sent back to the customer, providing dynamic links to pertinent information regarding server and storage configurations.

5. This process can be repeated each time an organization is planning to go through a Veritas Storage Foundation and/or Veritas Cluster Server installation or upgrade, and real-time valid configuration information will be used for comparison.

Without this service, how did users go about patch management and so forth?

S.D.: There are two ways that customers have historically been able to understand Veritas Storage Foundation and Veritas Cluster Server patch management.

1. Symantec sends an email notification based on the customer’s license subscription (e.g., Veritas Storage Foundation for Oracle Real Application Clusters). This isn’t platform-specific, because the detailed configuration information isn’t known; only the type of license is.

2. Alternatively, customers can visit the Symantec support site, search for the software version and platform version in their environment and determine whether there are valid patches that should be applied.

Can you offer real-world examples of how the service has improved things for users?

S.D.: Yes. Currently there are more than 500 customers using the Veritas Installation Assessment Service, and they have analyzed thousands of server and storage configurations.  And of the servers that have been assessed, about 40% of the servers were found to have configuration errors. The top two invalid configurations (constituting about 70% of the total errors) were (1) storage subsystems that weren’t configured properly and (2) insufficient disk space. The insufficient disk space error is for the Veritas Storage Foundation and/or Veritas Cluster Server software, not application/database data capacity.

How much does the service cost?

S.D.:  It’s $500 per physical server and is available now. 
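To make the workflow Derrington describes a little more concrete, here is a deliberately generic sketch of the collect-upload-compare loop: gather configuration details, check them against a supported-configuration matrix, and report findings such as insufficient disk space. None of this is Symantec’s actual tooling, report format or API; every field name and threshold is invented for illustration.

# Generic illustration of an agentless "collect, upload, compare" check.
# All field names and thresholds are hypothetical, not Symantec's.

collected_config = {
    "hostname": "db01",
    "os": "Solaris 10",
    "storage_foundation_version": "5.0",
    "free_disk_gb": 1.2,
}

# A supported-configuration matrix the service would maintain centrally.
supported_matrix = {
    ("Solaris 10", "5.0"): {"min_free_disk_gb": 2.0},
}

def assess(config: dict, matrix: dict) -> list[str]:
    """Return findings for one server, e.g. insufficient disk space."""
    findings = []
    key = (config["os"], config["storage_foundation_version"])
    rules = matrix.get(key)
    if rules is None:
        findings.append("configuration not in supported matrix")
        return findings
    if config["free_disk_gb"] < rules["min_free_disk_gb"]:
        findings.append("insufficient disk space for the software itself")
    return findings

print(assess(collected_config, supported_matrix))
# ['insufficient disk space for the software itself']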


November 12, 2008  10:13 PM

Cloud tidbits from the 451 Group Client Conference

Jo Maitland

Duncan Johnston-Watt, the CEO of stealthy upstart Cloudsoft Corp., gave the first presentation on his new company at the 451 Group’s Client Conference in Boston today.

“I like to say you’re not doing more with less; you’re doing nothing,” he said, speaking of the shift to cloud computing. “You will own less and less of the infrastructure and care only about what business services you need.”

Cloudsoft is building a product that uses patented mediated-routing IP from Enigmatec Corp., Johnston-Watt’s previous company. Called MarketMaker, the product will provide mediation as a service for service providers looking to get into the online auction, betting, brokerage and bookings businesses. “You want to be able to move mediation from one cloud to another to fulfill orders … to move order books around to provide a guaranteed performance level and predictable behavior,” he said.

The name of the company, which implies it intends to be the Microsoft of the cloud, is no accident. “What is the Microsoft Office of the cloud … what are the essential business services you need from the cloud?” Johnston-Watt said. That’s where he believes cloud computing will get interesting.

Meanwhile, keeping it real, William Fellows, principal analyst and co-founder of the 451 Group, said cultural and organizational issues concerning power, trust, control and ownership are the biggest barriers to adoption of cloud services by enterprise IT. He also believes that the contractual language is not there yet for service-level agreements that meet compliance regulations. But Fellows insisted that IT should not dismiss the trend. “Just understand what it’s good for.”


November 10, 2008  5:12 PM

EMC takes wraps off Atmos cloud plans

Jo Maitland

EMC Corp. says that it has a handful of Web 2.0 service providers using its new Atmos cloud-optimized storage (COS) product, but none that were ready to discuss it today. So for now, Atmos is an interesting technology announcement waiting for a reality check from customers.

And while EMC is focused on selling this to service providers initially, it does believe there’s an enterprise play down the line for media and entertainment, life sciences, and oil and gas companies interested in building private clouds. Somewhat confusingly, EMC also hinted at its plans eventually to become a service provider itself, which may cause some channel tension, but for now Atmos is a product only.

Here’s a taste of what EMC claims it will do. Atmos is a globally distributed file system (code-named Maui) that runs on purpose-built EMC hardware (code-named Hulk).

The software automatically distributes data, placing it on nodes across a network according to user-defined policies. These policies dictate what level of replication, versioning, compression, deduplication and disk-drive spin-down a particular piece of data should have as it resides in the cloud. Depending on how important the information is, there might be one, five or 10 copies of it around the world, for example.
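EMC hasn’t published the Atmos policy format, but conceptually a policy of the kind described above might look something like the sketch below. The schema, field names and placement logic are purely hypothetical, included only to make the idea of policy-driven placement concrete.

# Hypothetical illustration of policy-driven placement as described above.
# The policy schema and evaluation logic are invented, not EMC's.

policies = {
    "gold": {   # business-critical content
        "replicas": 10,
        "geographies": ["us-east", "eu-west", "apac"],
        "versioning": True,
        "compression": False,
        "dedupe": False,
        "spin_down_idle_disks": False,
    },
    "bronze": { # archival content
        "replicas": 1,
        "geographies": ["us-east"],
        "versioning": False,
        "compression": True,
        "dedupe": True,
        "spin_down_idle_disks": True,
    },
}

def place(object_id: str, policy_name: str) -> list[str]:
    """Decide where copies of an object should live, given its policy."""
    policy = policies[policy_name]
    targets = []
    for i in range(policy["replicas"]):
        region = policy["geographies"][i % len(policy["geographies"])]
        targets.append(f"{region}/node-{i}")
    return targets

print(place("press-release-001", "gold"))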

The closest thing out there today that resembles Atmos is Cleversafe.org.

Atmos also provides Web service application programming interfaces, including Representational State Transfer and Simple Object Access Protocol, as well as Common Internet File System and Network File System support for integration with file services; a unified namespace; browser-based admin tools; and multitenant support so that multiple applications can be served from the same infrastructure without co-mingling data. Simple Network Management Protocol support provides a plugin to existing reporting tools on top of the reports and alerts Atmos already offers, according to EMC.

The software ships on purpose-built hardware available in 120TB, 240TB or 360TB configurations. [Editor’s note: The National Center for Atmospheric Research has an archive already several petabytes in size. It would need roughly three of the largest configurations per petabyte just to hold its existing data. In other words, 360 TB is large, but not that large by today’s standards.]

There’s also a fit with VMware as Atmos can run on a VMware image, although Mike Feinberg, the senior VP of the cloud infrastructure group at EMC, says users don’t need VMware to use Atmos.

EMC did not announce pricing details today either, except to say that it’ll be competitive with existing petabyte-scale JBOD-type offerings.


November 5, 2008  10:01 PM

Cloud computing allowed you to read an 1851 New York Times article online

Mark Fontecchio

Nicholas Carr recounts the story of the New York Times trying to get its archives all online, and using cloud computing to do it.

The short version: NYT scanned all their articles in, resulting in four terabytes worth of TIFF files. They wanted to convert them all to PDFs but weren’t capable of doing it in-house. So a software programmer at NYT sent them all to Amazon’s Simple Storage Service (S3), wrote some code that ran on Amazon’s Elastic Compute Cloud to convert them to PDFs, and voila, a day later it was done.

The total cost for the computing job? Gottfrid told me that the entire EC2 bill came to $240. (That’s 10 cents per computer-hour times 100 computers times 24 hours; there were no bandwidth charges since all the data transfers took place within Amazon’s system – from S3 to EC2 and back.)

If it wasn’t for the cloud, Gottfrid told me, the Times may well have abandoned the effort. Doing the conversion would have either taken a whole lot of time or a whole lot of money, and it would have been a big pain in the ass. With the cloud, though, it was fast, easy, and cheap, and it only required a single employee to pull it off. “The self-service nature of EC2 is incredibly powerful,” says Gottfrid. “It is often taken for granted but it is a real democratizing force in lowering the barriers.”

Which brings Carr to his main point: Cloud computing will be important for what it will be able to do that already can’t be done. Up to now, most people are focusing on how to transfer their current IT infrastructure into the cloud. But cloud computing will make its mark by opening up avenues that were previously closed, or not even built yet.

But as one commenter stated, moving existing infrastructure is going to naturally be the first focus, as enterprises are worried about their current infrastructure, and not necessarily new tasks that the cloud could tackle. It will take a more long-term visionary within the company (such as the chief technology officer) to figure out which new trenches to build.
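As a closing aside, the arithmetic Gottfrid quotes is easy to verify, and it also gives a rough sense of the per-machine workload (the throughput figure below is my own estimate, not one from the Times):

# Sanity-checking the EC2 bill quoted above, plus a rough sense of the
# per-machine workload. The throughput figure is an estimate, not the Times'.
HOURLY_RATE = 0.10      # dollars per instance-hour
MACHINES = 100
HOURS = 24

bill = HOURLY_RATE * MACHINES * HOURS
print(f"EC2 bill: ${bill:.0f}")            # EC2 bill: $240

tiff_terabytes = 4
gb_per_machine_hour = tiff_terabytes * 1024 / (MACHINES * HOURS)
print(f"~{gb_per_machine_hour:.1f} GB of TIFFs processed per machine-hour")
# ~1.7 GB of TIFFs processed per machine-hour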

