Microsoft kicks off its sold-out BUILD Conference in Anaheim next week with…well, we’re not sure exactly with what. The company has been particularly stingy with details on the event, failing to provide an agenda or even an official list of speakers. So the rumor mill has been working overtime, with conjecture about what might be in store. (ARM-based tablet giveaways, anyone?)
This much we know: Windows 8 will be revealed in some form, along with further details on Windows Server 8 following this week’s Reviewer’s Workshop in Seattle. Some specifics have emerged about both products, especially the client OS; the Microsoft team’s Building Windows 8 blog has provided a starting point for conversation about new features and strategies that will affect developers and administrators alike. With that in mind, here are some topics we do expect to come up at BUILD.
Is Windows 8 a game-changer, or just another Microsoft tease? The week should provide some insight into that question, particularly if a public beta release is offered. Some themes to watch for:
Taking on the tablets
Microsoft aims to compete in the tablet market with Windows 8. The new, optional touch-centric interface (which builds upon the Metro UI seen on Windows Phone 7) will purportedly scale across all devices and platforms, from phones on up. Rumored support for low-power ARM-based processors is also based on a desire to meet tablet demand.
Hyper-V comes to the client
Microsoft confirmed that Hyper-V, previously limited to the server product, will come to Windows 8, along with VHDX, a new virtual hard drive format that allows for up to 16 TB of data. This brings up opportunities and questions alike for administrators, particularly those currently using VMware for virtualization. It could also bring MinWin, a stripped-down version of Windows, into the spotlight as an enabler for creating virtual appliances.
The Building Windows 8 blog has been fast and furious with updates on features in the new OS, including a ribbon-based Windows Explorer interface, native mounting of ISO and VHD files, USB 3.0 support, and improved file management. These are not the sexiest offerings, so you can bet there will be additional reveals as BUILD week goes on.
Windows Server 8
It’s inevitable that the server product would be stuck in the shadow of its client-facing sibling, but many BUILD attendees will be eager for information on Windows Server 8. Of the “100 or more new features” that were alluded to at the Worldwide Partner Conference in July, only a few have really been explored, chiefly those relating to Hyper-V Replica.
Sweet 16…or more?
As demonstrated at WPC 2011, admins will be able to manage 16 virtual processors per machine in this new version – which is apparently not the limit. But what is?
With its asynchronous virtual replication feature, Replica offers the ability to specify replication targets and snapshot intervals. This could have major implications for reducing server loads and increasing scalability.
Microsoft has touted its ‘private cloud’ solutions of late – with Hyper-V and System Center the products to make it happen. Expect this to be a consistent talking point throughout the week, prompting questions about application development as well as pricing and security risks.
But wait, there’s more…
- Azure. Several evangelists of Microsoft’s public cloud platform have been confirmed as BUILD speakers, and the company recently released an Azure toolkit for the mobile Android platform.
- Visual Studio 2012. What does the new version of Microsoft’s integrated development environment have in store?
- Fill in the blank…
What are you looking forward to hearing about at BUILD? Has the lack of information heightened your anticipation or lowered your expectations? Share your thoughts and predictions in the comments section, and look for our coverage from the event next week.
Does anyone really know how much server virtualization costs?
I’m not talking about the cost of the hardware, or the management overhead, or any of the ROI or TCO elements: just the straight up cost of buying the software that makes server virtualization work.
Microsoft implies that virtualization is free because it comes bundled with Windows Server 2008 R2, and with the upcoming Windows Server 8, where virtualization figures to play an even bigger role. But it is not really free if you have to pay for it, right? With Microsoft, you invest in Windows Server plus the costs attached to associated technologies. Still, if you are buying Windows licenses anyway, then Hyper-V at least comes across as free. Or does it? And why does this matter?
Simply put, VMware is changing the licensing game with vSphere 5, and those changes may make Hyper-V and other hypervisor offerings more attractive to cost-conscious IT managers. They also alter how vSphere is purchased: in some ways the new scheme simplifies licensing, and in others it nakedly shows that VMware is seeking to increase revenue.
With vSphere 5, pricing is no longer based on physical memory; it now reflects the virtual memory (vRAM) allocated to running virtual machines. What’s more, VMware has switched to a three-tier licensing model: Standard, which grants a 24 GB vRAM entitlement and allows up to eight virtual CPUs per VM; Enterprise, which grants 32 GB and up to eight virtual CPUs per VM; and Enterprise Plus, which grants 48 GB and up to 32 virtual CPUs per VM. Entitlements pool across all of a customer’s licenses, and prices are still assessed per CPU socket, but there’s no longer a limit on the number of cores per socket.
This adds up to a significant change in licensing terms. Version 4 of vSphere encouraged a “scale-up” approach: buying systems with a few CPU sockets and massive amounts of memory, and running lots of virtual machines on them, as opposed to a “scale-out” approach in which IT managers purchase more servers, each with much less RAM. The reason was that adding physical memory was “free,” at least until you hit 256 GB, while adding sockets (or new servers) cost money. The result was enterprises running VMware on two-socket servers with hundreds of gigabytes of RAM.
With Version 5 of vSphere, the scale-up model is penalized. A case in point: a two-socket, six-core-per-socket, 256 GB machine used to require two Enterprise licenses. Now that licensing covers all of the allocated RAM, the same machine needs eight Enterprise licenses. Simply put, such a host on vSphere 5 needs four times as many licenses, and costs four times as much, as it did on Version 4.
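As a back-of-the-envelope sketch, the license math above works out as follows. This is not any VMware tool; the function names are illustrative, and the 32 GB figure is the Enterprise vRAM entitlement cited above.

```python
# Hypothetical comparison of vSphere 4 vs. vSphere 5 Enterprise licensing
# for the two-socket, 256 GB "scale-up" host described above.

ENTERPRISE_VRAM_GB = 32  # vRAM entitlement per vSphere 5 Enterprise license

def v4_licenses(sockets):
    """vSphere 4: one license per physical CPU socket."""
    return sockets

def v5_licenses(sockets, vram_gb):
    """vSphere 5: still one license per socket, but you must buy enough
    licenses for the pooled vRAM entitlements to cover all VM memory."""
    needed_for_vram = -(-vram_gb // ENTERPRISE_VRAM_GB)  # ceiling division
    return max(sockets, needed_for_vram)

# The host from the example: 2 sockets, all 256 GB allocated to VMs.
old = v4_licenses(2)
new = v5_licenses(2, 256)
print(old, new, new / old)  # 2 licenses before, 8 after: a 4x increase
```

Note that the socket count still sets a floor: a four-socket host with little allocated vRAM would need four licenses under either scheme, so the penalty bites hardest on exactly the dense, memory-heavy configurations vSphere 4 encouraged.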
That makes free technologies such as Hyper-V all the more attractive. The real question is: will IT abandon vSphere in favor of Hyper-V? Probably not. If a company has already invested in vSphere, it will probably swallow the price changes and stick with what it knows. On the other hand, when Windows Server 8 hits the streets, companies looking to migrate may seriously consider going with Hyper-V.
Only time will tell. However, it certainly doesn’t hurt to take a closer look at Hyper-V to see if it really offers something for nothing.
Frank Ohlhorst is an award-winning technology journalist, professional speaker and IT business consultant with over 25 years of experience in the technology arena. He has written for several technology publications, including TechTarget, ComputerWorld and PCWorld. Ohlhorst was also the Executive Technology Editor for Ziff Davis Enterprise’s eWeek. You can contact him at email@example.com
Microsoft last week officially released the latest beta version of System Center Operations Manager, its health and performance monitoring product. System Center Operations Manager 2012 beta is now available for download. According to the accompanying Microsoft FAQ, SCOM 2012 offers:
· A single, consistent view across datacenters and clouds with an option for customizable dashboard templates;
· AVIcode technology for monitoring .NET applications;
· Support for monitoring heterogeneous environments including Windows, Linux, and UNIX servers;
· Integrated network device monitoring and alerts;
· A simplified management structure with support for automatic failover.
Eager system administrators and bloggers are wasting little time testing these new features, looking for the soft spots. One such tester expressed some disappointment that the latest release doesn’t feel significantly different from SCOM 2007. “I had expected an updated console and vastly improved notifications management, but as far as I can see, a lot of the code is the same,” said Trond Hindenes, a senior consultant at AVAN in Norway. “This upgrade might not be as much of a ‘no-brainer’ as I had hoped.”
Still, Hindenes notes some positive updates to SCOM 2012, including the addition of deep monitoring of .NET-based Web applications thanks to Microsoft’s acquisition of AVIcode. In a recent post for GotchaHunter, Alex Shlega writes that while deployment of .NET monitoring is now easier, the limited server-side configuration options make “application deep-dive troubleshooting” a difficult proposition. In response, Daniele Muscetta of Microsoft tweeted that more options will be visible in the RC version.
Another significant change in SCOM 2012 is there is no longer a root management server like there was in Operations Manager 2007. Instead, all servers are peers. Hindenes suggests this will mean better scalability, enabling “easier high-availability configuration without configuring Windows clustering for the SCOM infrastructure.” Microsoft MVP Graham Davies offers more background on the change on the SystemCenterSolutions.com blog.
The management server change is one of several “little things that make [SCOM 2012] more scalable and reliable,” Davies noted in a recent phone interview. Still, while features like new dashboards, network device monitoring and the ability to monitor non-Windows environments are welcome, “they don’t make for a revolutionary release like the jump from MOM 2005 to SCOM 2007,” Davies said, warning that companies with legacy systems must be careful when planning for such a transition, as it is important to ensure that their existing environments meet the new supported configurations. For example, “there are over 80 new PowerShell cmdlets, so while existing scripts will work on SCOM 2012, enterprises might want to upgrade them to gain the new functionality. Additionally, there is no support for Windows 2000 agents and the integrated AVIcode (now APM) monitoring only supports web applications on IIS7.”
SCOM 2012 may look significantly different in its final form (RTM is estimated for the latter half of 2012), and we’ll plan to post updates as that release becomes more clear.
Let us know what you think about this story; email Ben Rubenstein at firstname.lastname@example.org.
Microsoft finally dragged Windows Server 8 out into the light at its Worldwide Partner Conference last week, offering up a brief demo and showing off a few of the “100 or more” new features expected to be in the product. What was clear from this glimpse is that Microsoft will place a heavier emphasis on virtualization in the new release.
For instance, it showed off Hyper-V Replica, which offers support for at least 16 virtual processors. The new technology also supplies asynchronous virtual machine replication to offsite locations for things such as mission-critical data contained in SQL Server databases. This is a direct response to users who have complained about the inability to properly scale the product to handle larger workloads.
Company officials made it a point to say Hyper-V Replica will work with RemoteFX and Dynamic Memory, two virtualization capabilities included in Windows Server 2008 R2 Service Pack 1, released earlier this year. Windows Server 8 will remain an important building block on which corporate users can build private clouds, company officials noted, which implies virtualization will play a role here, too.
According to Jeff Woolsey, the Principal Program Manager Lead for Windows Server Virtualization who conducted the demo, Windows Server 8 will be capable of delivering “massive, massive scale” as well as “unlimited replication” right out of the box.
Woolsey talked about how Hyper-V Replica can also improve fault tolerance, while side-stepping much of the additional hardware and software costs associated with upgrading to that capability.
While it was good to see Microsoft at least begin to talk about the product (which, CEO Steve Ballmer said in his conference keynote, now owns 76% of the server market), you have to wonder why it took so long.
According to one Windows Server beta tester I talked to this week, that reluctance may center on not wanting to tip its hand too early to its archrival in the virtualization market, VMware.
“They are making such a big play in (server) virtualization and virtual desktops, they didn’t want to give VMware too much of a heads-up. They are also trying to figure out how to sustain their licensing model as well as take on Citrix and VMware. There is a lot they still have to work through before they roll this out,” he said.
There might be something to that. On the same day Microsoft demoed the Hyper-V Replica-Windows Server 8 combination, VMware trotted out vSphere 5. The new version, perhaps not coincidentally, has the ability to support virtual machines about four times more powerful than its predecessor could, as well as improvements to all of its virtualization capabilities.
Unfortunately, VMware also announced a new licensing plan for vSphere 5 that caused a stir among its users. Instead of being based solely on physical CPUs and physical RAM per server, the plan ties per-CPU licensing to pooled virtual RAM (vRAM) entitlements. Many VMware users believe this switch will cost them significantly more.
The new licensing could slow adoption of VMware’s core offerings and, at least for now, tilts the playing field in Microsoft’s favor with its free hypervisor built into Windows Server. But Redmond must be careful not to step on a landmine with a new licensing plan that takes away its cost advantage.
There is no word on when Microsoft might deliver a meaningful code drop of Windows Server 8 to developers. We’ll have to wait until Microsoft’s Build conference in mid-September to get more technical details.
Let us know what you think about this story; email Ed Scannell at email@example.com.
Microsoft’s Azure platform has generally received good reviews from many IT shops and third party developers and appears to be gaining mindshare despite a band of fierce competitors including Google, IBM and Amazon.
Redmond officials, who mentioned to me at Tech Ed last week they now have 30,000 Azure customers, can’t afford to take anything for granted. They will have to stay aggressive and focused if they hope to maintain the platform’s momentum in a fast moving market.
The company could get some help sustaining this momentum from an unexpected source — Windows Server HPC 2008 R2. Typically focused on the higher-end technical computing markets, the platform is one some company officials want to promote and make available to a much broader IT and developer audience. And by getting IT shops and developers to deliver a more diverse set of commercial apps for HPC, they believe they can drive higher usage of Azure.
“There are more than a few Fortune companies and developers that can benefit from (HPC’s) parallel and clustering capabilities. These apps would be a natural fit for Azure,” said one Microsoft official at last week’s show who preferred not to be quoted by name.
A couple of months ago Bill Hilf, the General Manager of Microsoft’s Technical Computing Group, said he believes HPC R2 applications will drive higher usage of Azure across many IT data centers. He went as far as to say that technical computing workloads and other compute intensive applications would prove to be the killer app for Azure.
I’m not sure I would go that far, but it gives you an idea of what Microsoft’s hopes and dreams are for HPC R2 as a general purpose mainframe in the cloud.
Further evidence Microsoft wants to lift HPC R2 out of its niche and into the much bigger cloud arena was its reorganization earlier this month. That reorg moved the HPC R2 team into the Azure organization run by Bill Laing.
Also earlier this month, Microsoft delivered Service Pack 2 for HPC R2 that, not surprisingly given the above evidence, contains several new features pertaining to Azure including the ability to add Azure VM roles to clusters and the ability to add MPI-based jobs on Azure nodes.
Another bridge Microsoft will build to connect HPC R2 and Azure is Dryad — a competing technology to Google’s MapReduce and Apache Hadoop. Dryad helps developers create distributed programs that can run on everything from small clusters up to large datacenters. The company hopes to deliver Dryad for HPC R2 by year’s end. It will be interesting to see how many IT shops with compute-intensive workloads Microsoft can attract to its cloud strategy using HPC R2 as the incentive.
If you are using Windows Server HPC 2008 R2 in your datacenter to host cloud applications, or just exploring some possibilities, let me know.
We have heard this talk before, of course: Microsoft’s power and influence have peaked and the company is on a slow slide into irrelevancy.
The latest chatter about this started late last week with Roger McNamee, co-founder of Elevation Partners, in an appearance on CNBC. Asked what companies and technologies would dominate the Internet over the next few years, he said the coming tidal wave of Apple iPads and Android-based smart phones would sink the fortunes of PCs and, along with them, Windows.
“For Microsoft Windows, this is the cycle where it stops growing,” he said. “iPads and smartphones allow corporations to trade down and eliminate thousands of dollars per year in supporting Windows desktops. This is the year (2011) Windows has fallen below 50% of Internet-connected devices, down from 97% 10 years ago.”
Corporations could save as much as $100 billion by eliminating the support costs of desktop PCs and laptops over the next few years, McNamee said, with a lot of those savings expected to be used to purchase tablets and smartphones.
But to predict that the rapid rise of iPads and smartphones alone can crumble the Windows franchise in the next phase of the Internet, thereby relegating Microsoft to a second-rate power, is a bit short-sighted.
True, Microsoft doesn’t figure to have a strong offering to go up against the iPad and Android-based phones any time soon (sorry, Windows Mobile). But what it does have are tens of thousands of corporations committed to long-term licensing agreements for its core Windows products — products in which corporations have invested billions of dollars in user training over the decades. And, oh yeah, the company has tens of billions in cash to buy its way into the next generation of Internet computing.
But here is the real threat to Microsoft: Microsoft.
Over the past few years Microsoft has steadily increased its financial and philosophical commitment to establishing a meaningful cloud strategy, introducing significant platforms such as Azure and Office 365. The company will eventually deliver cloud versions of its core Windows products if it hopes to keep pace with its major competitors. If it continues along this path Redmond could cannibalize sales of its existing desktop and server products that generate the bulk of its $67 billion in revenues.
In a recent conversation with one of the most respected (if not most entertaining) Microsoft observers, Mark Minasi, he summed it up this way:
“Microsoft is making this huge bet on the cloud, which could prove to be a Windows desktop killer. They want to have both Azure for its apps stuff as well as keeping its current platforms for hosted or SaaS kinds of things like Exchange and SharePoint. It’s like they are carrying their favorite son in hopes of adopting a bigger, better one. This could make for an interesting Harvard Business Review case study in the future.”
If the Microsoft visionaries have worked out a smooth transition between its on-premises/hosted/SaaS-based products and their upcoming cloud-based versions, one that doesn’t devastate its revenue flow, they’re not saying. And perhaps it is unfair to expect them to offer up details of such a transition this early on. But with both cloud computing technologies and user interest in them growing rapidly, Microsoft should shed some light on this sooner rather than later.
For now, I think it’s fair to assume that iPads and smartphones alone won’t topple one of the elite suppliers of enterprise technologies any time soon. Only Microsoft can do that.
Ed Scannell is Executive Editor with SearchDatacenter.com. He can be contacted at firstname.lastname@example.org.
For a recent article on Opalis and third-party integration, I spoke with Microsoft director Robert Reynolds to learn about the company’s strategy for the automation technology going forward. The story included a few choice details from Reynolds regarding Microsoft’s reliance on the IT community for the development of Opalis integration packs and how the new Opalis branding will be unveiled at Microsoft Management Summit (MMS) 2011.
Those quotes were just a sampling from a much longer conversation, however, so I thought I’d share a few interesting notes here that didn’t make it into the original article.
It sounds like Windows Intune is just about ready for prime time.
Microsoft’s new technology, which it describes as “an end-to-end Microsoft solution that brings together Windows cloud services for PC management and endpoint protection,” is set for commercial availability on March 23 – right smack dab in the middle of Microsoft Management Summit (MMS) 2011. A website dedicated to Intune is now up, where folks can learn more about it before MMS and presumably test and purchase it once it ships.
You have to hand it to Quest Software CEO Doug Garn — the guy is a great sport.
Recently, my colleagues Brian Madden, Gabe Knuth and Bridget Botelho recorded a video for SearchVirtualDesktop.com. The purpose of the video was to preview what’s ahead for desktop virtualization in 2011, and the conversation touched on several different topics.
When Quest made its way into the convo, Brian (always willing to share his thoughts on … well … anything) made a crack about Quest’s logo being somewhat outdated (check in at about the 12:20 mark): “Can I say this directly to you, Quest? Can you change your logo and update it so it’s not a 1992 logo?”
Not one to take those kinds of remarks lying down, Garn and the folks at Quest put together a video of their own especially for Brian. Instead of ruining it for you, click play below to see Garn’s response. You have to respect folks who don’t take themselves too seriously these days.
It sounds like Microsoft is finally close to announcing RTM for both service packs, with MSDN availability set for Feb. 16 and a Feb. 22 release date for the Web.
ORIGINAL POST 2/02/2011
320 days. That’s how long it’s been since news first broke of the initial service packs for Windows 7 and Windows Server 2008 R2. Yet while general availability was expected for early Q1, January has come and gone with SP1 still officially under wraps.
So what’s going on? Back on January 14, a Russian Microsoft blog posted that both service packs had been shipped to OEMs, leading many to believe that a release to MSDN was imminent. Not long after the news spread like wildfire, the same Russian site posted an update stating that SP1 was in fact not released to OEMs, and that the original post included some “inaccuracies”. (Amusingly, the retraction is the only part of the entry that’s in English.)