Posted by: Jack Vaughan
By Rob Barry
Ahead of its anticipated formal unveiling of the Azure cloud platform at its Professional Developers Conference (PDC) next month, Microsoft is firming up tool and platform details on its version of cloud architecture.
Things are warming up on the cloud for Microsoft. As part of a Visual Studio 2010 tools beta announcement, Microsoft this week divulged aggressive Azure cloud pricing for developers subscribing to its MSDN developer services.
In the announcement, Microsoft's Soma Somasegar, senior vice president of the Developer Division, said the company will run a promotion giving MSDN Premium and BizSpark customers free use of the Windows Azure Platform once Windows Azure becomes commercially available.
“To kick start developers on this powerful platform, subscribers will get 750 free compute hours per month for eight months,” Somasegar wrote on his blog.
Last week Microsoft froze some of its cloud database features, publishing a second Community Technology Preview (CTP2) of its SQL Azure Database ahead of PDC.
Meanwhile, Microsoft’s best and brightest design gurus dedicated nearly a full day of discussion to Azure architecture at the company’s Patterns & Practices (P&P) 2009 Summit.
In a P&P session about designing for Azure, Microsoft Technical Strategist Steve Marx had a number of tips for developers. One of the biggest initial decisions Azure users will have to make is how to handle storage. Here the decision is whether to stick with a familiar relational database, in the form of the SQL Azure Database, or to make the leap into distributed non-relational storage with Windows Azure Storage.
“The obvious advantage of SQL Azure is that it’s SQL Server and you don’t have to manage it,” said Marx. “Pointing SQL Server to SQL Azure is often just a config change.”
A downside is that this may run slower than the non-relational alternative.
Marx said SQL Azure's partitioning lets developers set a size for pockets of data that are then stored in various places throughout the system. The approach will be easier for the established SQL Server community to adopt, but it is nothing revolutionary.
But Marx suggested many users will find benefit in Azure Storage. With this approach, developers can leverage billions of rows, automatic load management, a flexible schema and optimistic concurrency.
“Windows Azure Storage has blobs, queues and tables, which is more like the Google Big Table system than a database,” said Marx. “I think ‘Windows Azure tables’ just scales better [than SQL Azure] in general.”
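The table model Marx describes can be pictured without any SDK at all. The following is a minimal, hypothetical in-memory sketch (plain Python, not the actual Windows Azure Storage API): entities are schema-free property bags addressed by a partition key and a row key, and rows in the same table need not share a schema.

```python
# Hypothetical in-memory model of a non-relational "table" store.
# Each entity is a flexible-schema property bag addressed by
# (partition_key, row_key); entities in one table may differ in shape.
table = {}

def insert_entity(partition_key, row_key, properties):
    """Store an entity under its two-part key (illustrative helper)."""
    table[(partition_key, row_key)] = dict(properties)

insert_entity("customer-42", "order-001", {"item": "widget", "qty": 3})
insert_entity("customer-42", "order-002", {"item": "gadget", "note": "gift"})

# Point lookups by full key are cheap; in a real distributed store the
# partition key also determines how entities spread across nodes.
assert table[("customer-42", "order-001")]["qty"] == 3
```

Note that the two entities carry different properties — the "flexible schema" Marx cites — which a relational table would not allow without nullable columns or schema changes.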
To get the most efficient use and best ROI out of Azure, Marx offered up two tips.
*Denormalize your data. Where normalization keeps a standard database orderly, denormalization in a distributed store helps avoid cross-partition queries. To do this, replicate properties across relationships, duplicate data for multiple indexes and maintain aggregates.
*Calculate offline. It is beneficial to get into the practice of performing expensive queries offline. Marx recommended maintaining aggregates asynchronously and pre-computing results when possible.
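Both tips can be sketched together. The snippet below is an illustrative Python toy (dicts standing in for storage partitions; all names hypothetical): each order is written twice, once per lookup pattern, so reads never cross partitions, and a count aggregate is maintained at write time instead of being recomputed by an expensive query at read time.

```python
# Denormalization: the same order is stored twice, keyed two ways,
# so each read is a single-partition lookup.
orders_by_customer = {}   # "partition key" is the customer id
orders_by_product = {}    # duplicate copy; "partition key" is the product id

# Pre-computed aggregate, maintained as writes happen rather than
# derived later with a costly cross-partition query.
order_count_by_customer = {}

def place_order(customer_id, product_id, qty):
    order = {"customer": customer_id, "product": product_id, "qty": qty}
    orders_by_customer.setdefault(customer_id, []).append(order)
    orders_by_product.setdefault(product_id, []).append(order)
    order_count_by_customer[customer_id] = \
        order_count_by_customer.get(customer_id, 0) + 1

place_order("alice", "widget", 2)
place_order("alice", "gadget", 1)

# Reads are now cheap lookups, not joins or scans.
assert order_count_by_customer["alice"] == 2
assert len(orders_by_product["widget"]) == 1
```

The trade-off is the classic one: writes do more work and store redundant copies, in exchange for reads that stay inside one partition.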
“Windows Azure is not just for Web applications,” Marx said, “just like the cloud is not just for Web applications. It’s great for distributing large workloads.”
In such cases, MapReduce is the model many point to. It helps to have a “reduce” step built into one’s system where, after all the computations are carried out, aggregates are produced automatically.
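The MapReduce shape described above can be shown in a few lines. This is a single-process toy, not a distributed framework: the "map" step emits key/value pairs, and the "reduce" step folds each key's values together, so the aggregates fall out of the final step automatically.

```python
from collections import defaultdict

def map_words(document):
    """Map step: emit a (word, 1) pair for every word seen."""
    for word in document.split():
        yield word, 1

def reduce_counts(pairs):
    """Reduce step: fold each key's values into a running total."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

docs = ["the cloud", "the platform"]
pairs = [pair for doc in docs for pair in map_words(doc)]
counts = reduce_counts(pairs)
assert counts["the"] == 2
```

In a real distributed run, the map calls would execute in parallel across many machines and the pairs would be shuffled by key before reduction, but the aggregate-producing reduce step is the same idea.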
*If you plan to use Azure in a service-oriented architecture, it is important to allow it to function both synchronously and asynchronously. Services behave differently depending on what other services they must access, and it is not uncommon to have a mixture of both types.
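One way to picture that mixture (a sketch using Python's asyncio, with hypothetical service names — not any Azure API): a cheap local lookup is answered synchronously, while a slow downstream dependency is awaited asynchronously so the caller is not blocked while it waits.

```python
import asyncio

def lookup_price(item):
    """Synchronous path: local, cheap, returns immediately."""
    return {"widget": 9.99}.get(item, 0.0)

async def check_inventory(item):
    """Asynchronous path: stands in for a slow remote service call."""
    await asyncio.sleep(0.01)  # simulated network latency
    return 5

async def quote(item):
    price = lookup_price(item)            # completes inline
    stock = await check_inventory(item)   # yields control while waiting
    return {"item": item, "price": price, "in_stock": stock}

result = asyncio.run(quote("widget"))
assert result["price"] == 9.99 and result["in_stock"] == 5
```

The design point is that neither style is "correct" on its own: a service composed of both fast local calls and slow remote ones naturally ends up with both shapes in the same code path.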
Includes reporting by Jack Vaughan