Microsoft last week officially released the latest beta version of System Center Operations Manager, its health and performance monitoring product. System Center Operations Manager 2012 beta is now available for download. According to the accompanying Microsoft FAQ, SCOM 2012 offers:
· A single, consistent view across datacenters and clouds with an option for customizable dashboard templates;
· AVIcode technology for monitoring .NET applications;
· Support for monitoring heterogeneous environments including Windows, Linux, and UNIX servers;
· Integrated network device monitoring and alerts;
· A simplified management structure with support for automatic failover.
Eager system administrators and bloggers are wasting little time testing these new features and probing for soft spots. One such tester expressed some disappointment that the latest release doesn’t feel significantly different from SCOM 2007. “I had expected an updated console and vastly improved notifications management, but as far as I can see, a lot of the code is the same,” said Trond Hindenes, a senior consultant at AVAN in Norway. “This upgrade might not be as much of a ‘no-brainer’ as I had hoped.”
Still, Hindenes notes some positive updates to SCOM 2012, including the addition of deep monitoring of .NET-based Web applications thanks to Microsoft’s acquisition of AVIcode. In a recent post for GotchaHunter, Alex Shlega writes that while deployment of .NET monitoring is now easier, the limited server-side configuration options make “application deep-dive troubleshooting” a difficult proposition. In response, Daniele Muscetta of Microsoft tweeted that more options will be visible in the RC version.
Another significant change in SCOM 2012 is that there is no longer a root management server, as there was in Operations Manager 2007; instead, all management servers are peers. Hindenes suggests this will mean better scalability, enabling “easier high-availability configuration without configuring Windows clustering for the SCOM infrastructure.” Microsoft MVP Graham Davies offers more background on the change on the SystemCenterSolutions.com blog.
The management server change is one of several “little things that make [SCOM 2012] more scalable and reliable,” Davies noted in a recent phone interview. Still, while features like new dashboards, network device monitoring and the ability to monitor non-Windows environments are welcome, “they don’t make for a revolutionary release like the jump from MOM 2005 to SCOM 2007,” Davies said. He warned that companies with legacy systems should plan such a transition carefully and ensure their existing environments meet the new supported configurations. For example, “there are over 80 new PowerShell cmdlets, so while existing scripts will work on SCOM 2012, enterprises might want to upgrade them to gain the new functionality. Additionally, there is no support for Windows 2000 agents, and the integrated AVIcode (now APM) monitoring only supports web applications on IIS7.”
SCOM 2012 may look significantly different in its final form (RTM is estimated for the latter half of 2012), and we’ll plan to post updates as that release becomes clearer.
Let us know what you think about this story; email Ben Rubenstein at email@example.com.
Microsoft finally dragged Windows Server 8 out into the light at its Worldwide Partner Conference last week, offering up a brief demo and showing off a few of the “100 or more” new features expected to be in the product. What was clear from this glimpse is that Microsoft will place a heavier emphasis on virtualization in the new release.
For instance, it showed off Hyper-V Replica, which offers support for at least 16 virtual processors. The new technology also supplies asynchronous virtual machine replication to offsite locations for things such as mission-critical data contained in SQL Server databases. This is a direct response to users who have complained about the inability to properly scale the product to handle larger workloads.
Company officials made it a point to say Hyper-V Replica will work with RemoteFX and Dynamic Memory, two virtualization capabilities included in Windows Server 2008 R2 Service Pack 1, released earlier this year. Windows Server 8 will remain an important building block on which corporate users can build private clouds, company officials noted, which implies virtualization will play a role here, too.
Jeff Woolsey, the principal program manager lead for Windows Server virtualization who conducted the demo, said Windows Server 8 will be capable of delivering “massive, massive scale” as well as “unlimited replication” right out of the box.
Woolsey talked about how Hyper-V Replica can also improve fault tolerance, while side-stepping much of the additional hardware and software costs associated with upgrading to that capability.
While it was good to see Microsoft at least begin to talk about the product, which Microsoft CEO Steve Ballmer said in his conference keynote now owns 76% of the server market, you have to wonder why it took so long.
According to one Windows Server beta tester I talked to this week, that reluctance may stem from Microsoft not wanting to tip its hand too early to its archrival in the virtualization market, VMware.
“They are making such a big play in (server) virtualization and virtual desktops, they didn’t want to give VMware too much of a heads-up. They are also trying to figure out how to sustain their licensing model as well as take on Citrix and VMware. There is a lot they still have to work through before they roll this out,” he said.
There might be something to that. On the same day Microsoft demoed the Hyper-V Replica-Windows Server 8 combination, VMware trotted out vSphere 5. The new version, perhaps not coincidentally, arrived with the ability to support virtual machines about four times more powerful than its predecessor’s, along with improvements to all of its virtualization capabilities.
Unfortunately, VMware also announced a new licensing plan for vSphere 5 that caused a stir among its users. The plan is still based on per-CPU licenses, but it replaces the old per-server limits on physical cores and RAM with pooled “vRAM” entitlements tied to each license. Many VMware users believe this switch will cost them significantly more.
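A rough back-of-the-envelope sketch (in Python) shows why a pooled-vRAM model can cost more for RAM-dense hosts. The 48 GB-per-license entitlement and the host sizing below are made-up example figures for illustration, not VMware’s actual list terms:

```python
# Hypothetical illustration: under a pooled-vRAM model, a host may need
# more licenses than it has CPU sockets if its running VMs consume a
# large vRAM allocation. All figures here are invented examples.

def licenses_needed(cpus, vram_gb, vram_per_license_gb):
    """Licenses required: at least one per CPU socket, and enough to
    cover the vRAM allocated to running VMs from the pooled entitlement."""
    by_cpu = cpus
    by_vram = -(-vram_gb // vram_per_license_gb)  # ceiling division
    return max(by_cpu, by_vram)

# A 2-socket host with 256 GB of vRAM allocated to running VMs,
# assuming a hypothetical 48 GB vRAM entitlement per license:
old_model = 2                            # per-CPU only: 2 licenses
new_model = licenses_needed(2, 256, 48)  # per-CPU plus vRAM pool
print(old_model, new_model)              # 2 before vs 6 after
```

The same host that needed two per-socket licenses before would need six under these example numbers, which is the kind of jump that had RAM-heavy shops complaining.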
The new licensing could slow adoption of VMware’s core offerings and, at least for now, tilts the playing field in Microsoft’s favor with its free hypervisor built into Windows Server. But Redmond must be careful not to step on a landmine with a new licensing plan that takes away its cost advantage.
There is no word on when Microsoft might deliver a meaningful code drop of Windows Server 8 to developers. We’ll have to wait until Microsoft’s Build conference in mid-September to get more technical details.
Let us know what you think about this story; email Ed Scannell at firstname.lastname@example.org.
Microsoft’s Azure platform has generally received good reviews from many IT shops and third-party developers, and it appears to be gaining mindshare despite a band of fierce competitors including Google, IBM and Amazon.
Redmond officials, who mentioned to me at Tech Ed last week they now have 30,000 Azure customers, can’t afford to take anything for granted. They will have to stay aggressive and focused if they hope to maintain the platform’s momentum in a fast moving market.
The company could get some help sustaining this momentum from an unexpected source — Windows Server HPC 2008 R2. The platform has typically focused on higher-end technical computing markets, but some company officials want to promote it and make it available to a much broader IT and developer audience. And by getting IT shops and developers to deliver a more diverse set of commercial apps for HPC, they believe it can drive higher usage of Azure.
“There are more than a few Fortune companies and developers that can benefit from (HPC’s) parallel and clustering capabilities. These apps would be a natural fit for Azure,” said one Microsoft official who preferred not to be quoted at last week’s show.
A couple of months ago, Bill Hilf, the general manager of Microsoft’s Technical Computing Group, said he believes HPC R2 applications will drive higher usage of Azure across many IT data centers. He went so far as to say that technical computing workloads and other compute-intensive applications would prove to be the killer app for Azure.
I’m not sure I would go that far, but it gives you an idea of what Microsoft’s hopes and dreams are for HPC R2 as a general purpose mainframe in the cloud.
Further evidence Microsoft wants to lift HPC R2 out of its niche and into the much bigger cloud arena was its reorganization earlier this month. That reorg moved the HPC R2 team into the Azure organization run by Bill Laing.
Also earlier this month, Microsoft delivered Service Pack 2 for HPC R2 that, not surprisingly given the above evidence, contains several new features pertaining to Azure including the ability to add Azure VM roles to clusters and the ability to add MPI-based jobs on Azure nodes.
Another bridge Microsoft will build to connect HPC R2 and Azure is Dryad, a competing technology to Google’s MapReduce and Apache Hadoop. Dryad helps developers create distributed programs that scale from small clusters up to large datacenters. The company hopes to deliver Dryad for HPC R2 by year’s end.
It will be interesting to see how many IT shops with compute-intensive workloads Microsoft can attract to its cloud strategy using HPC R2 as the incentive.
If you are using Windows Server HPC 2008 R2 in your datacenter to host cloud applications, or just exploring some possibilities, let me know.
We have heard this talk before, of course: Microsoft’s power and influence have peaked and the company is on a slow slide into irrelevancy.
The latest chatter about this started late last week with Roger McNamee, co-founder of Elevation Partners, in an appearance on CNBC. Asked what companies and technologies would dominate the Internet over the next few years, he said the coming tidal wave of Apple iPads and Android-based smart phones would sink the fortunes of PCs and, along with them, Windows.
“For Microsoft Windows, this is the cycle where it stops growing,” he said. “iPads and smartphones allow corporations to trade down and eliminate thousands of dollars per year in supporting Windows desktops. This is the year (2011) Windows has fallen below 50% of Internet-connected devices, down from 97% 10 years ago.”
Corporations could save as much as $100 billion by eliminating the support costs of desktop PCs and laptops over the next few years, McNamee said, with a lot of those savings expected to be used to purchase tablets and smartphones.
But to predict that the rapid rise of iPads and smartphones alone can crumble the Windows franchise in the next phase of the Internet, thereby relegating Microsoft to a second-rate power, is a bit shortsighted.
True, Microsoft doesn’t figure to have a strong offering to go up against the iPad and Android-based phones any time soon (sorry, Windows Mobile). But what it does have are tens of thousands of corporations committed to long-term licensing agreements for its core Windows products, products in which corporations have invested billions of dollars in user training over the decades. And, oh yeah, the company has tens of billions in cash to buy its way into the next generation of Internet computing.
But here is the real threat to Microsoft: Microsoft.
Over the past few years Microsoft has steadily increased its financial and philosophical commitment to establishing a meaningful cloud strategy, introducing significant platforms such as Azure and Office 365. The company will eventually deliver cloud versions of its core Windows products if it hopes to keep pace with its major competitors. If it continues along this path Redmond could cannibalize sales of its existing desktop and server products that generate the bulk of its $67 billion in revenues.
In a recent conversation, Mark Minasi, one of the most respected (if not the most entertaining) Microsoft observers, summed it up this way:
“Microsoft is making this huge bet on the cloud, which could prove to be a Windows desktop killer. They want to have both Azure for its apps stuff as well as keeping its current platforms for hosted or SaaS kinds of things like Exchange and SharePoint. It’s like they are carrying their favorite son in hopes of adopting a bigger, better one. This could make for an interesting Harvard Business Review case study in the future.”
If the Microsoft visionaries have worked out a smooth transition between its on-premises/hosted/SaaS-based products and their upcoming cloud-based versions that doesn’t devastate its revenue flow, they’re not saying. And perhaps it is unfair to expect them to offer up details of such a transition this early on. But with both cloud computing technologies and user interest in them growing rapidly, Microsoft should shed some light on this sooner rather than later.
For now, I think it’s fair to assume that iPads and smartphones alone won’t topple one of the elite suppliers of enterprise technologies any time soon. Only Microsoft can do that.
Ed Scannell is Executive Editor with SearchDatacenter.com. He can be contacted at email@example.com.
For a recent article on Opalis and third-party integration, I spoke with Microsoft director Robert Reynolds to learn about the company’s strategy for the automation technology going forward. The story included a few choice details from Reynolds regarding Microsoft’s reliance on the IT community for the development of Opalis integration packs and how the new Opalis branding will be unveiled at Microsoft Management Summit (MMS) 2011.
Those quotes were just a sampling from a much longer conversation, however, so I thought I’d pull out a few interesting notes that didn’t make it into the original article here.
It sounds like Windows Intune is just about ready for prime time.
Microsoft’s new technology which it describes as “an end-to-end Microsoft solution that brings together Windows cloud services for PC management and endpoint protection” is set for commercial availability on March 23 – right smack dab in the middle of Microsoft Management Summit (MMS) 2011. A website dedicated to Intune is now up where folks can learn more about it before MMS and presumably test and purchase it once it ships.
You have to hand it to Quest Software CEO Doug Garn — the guy is a great sport.
Recently, my colleagues Brian Madden, Gabe Knuth and Bridget Botelho recorded a video for SearchVirtualDesktop.com. The purpose of the video was to preview what’s ahead for desktop virtualization in 2011, and the conversation touched on several different topics.
When Quest made its way into the convo, Brian (always willing to share his thoughts on … well … anything) made a crack about Quest’s logo being somewhat outdated (check in at about the 12:20 mark): “Can I say this directly to you, Quest? Can you change your logo and update it so it’s not a 1992 logo?”
Not one to take those kinds of remarks lying down, Garn and the folks at Quest put together a video of their own especially for Brian. Instead of ruining it for you, click play below to see Garn’s response. You have to respect folks that don’t take themselves too seriously these days.
It sounds like Microsoft is finally close to announcing RTM for both service packs, with MSDN availability set for Feb. 16 and a Feb. 22 release date for the Web.
ORIGINAL POST 2/02/2011
320 days. That’s how long it’s been since news first broke of the initial service packs for Windows 7 and Windows Server 2008 R2. Yet while general availability was expected for early Q1, January has come and gone with SP1 still officially under wraps.
So what’s going on? Back on January 14, a Russian Microsoft blog posted that both service packs had been shipped to OEMs, leading many to believe that a release to MSDN was imminent. Not long after the news spread like wildfire, the same Russian site posted an update stating that SP1 was in fact not released to OEMs, and that the original post included some “inaccuracies”. (Amusingly, the retraction is the only part of the entry that’s in English.)
ZDNet UK just posted some screenshots from the SCVMM 2012 demos last week, for anyone interested in a few visuals of the interface in action.
ORIGINAL POST 1/25/2011
Those who enjoyed Microsoft’s cloud push last year are really going to like 2011. Many of the technologies that have been discussed over the past year are slowly beginning to see the light of day, starting with the next version of System Center Virtual Machine Manager (SCVMM).
During a live meeting this week, Microsoft product manager Kenon Owens demoed the yet-to-be released SCVMM 2012 (previously dubbed “v.Next” before TechEd Europe in November). Microsoft is positioning SCVMM as a key component for organizations looking to private clouds, and Owens broke down some of the ways the software can be used to control infrastructures and services in a cloud-based environment.
Is 5:00 p.m. on a Friday too late for a little Hyper-V-centric news? Nah.
Microsoft’s Michael Kleef just posted a short update on his personal TechNet blog about a new discovery involving Hyper-V R2 SP1. Apparently, the latest version of Hyper-V will now support up to 12 virtual machines per logical processor, up from the previous max of eight, but only for Windows 7 SP1 guests.
In other words, admins can host up to 12 guest operating systems per logical processor on a Hyper-V host as long as each of those guest OSes is running Windows 7 SP1. Otherwise, the ratio of VMs per logical processor remains at 8:1.
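The density rule above works out to straightforward arithmetic; here is a minimal Python sketch of it (the 16-logical-processor host is an invented example):

```python
# Sketch of the Hyper-V R2 SP1 density rule described above:
# 12 VMs per logical processor when every guest runs Windows 7 SP1,
# otherwise the previous 8:1 ceiling applies.

def max_vms(logical_processors, all_guests_win7_sp1):
    ratio = 12 if all_guests_win7_sp1 else 8
    return logical_processors * ratio

# A hypothetical 16-logical-processor host (e.g., dual 8-core CPUs):
print(max_vms(16, True))   # 192 VMs if every guest is Windows 7 SP1
print(max_vms(16, False))  # 128 VMs for a mixed guest population
```

On that example host, the SP1 change raises the supported ceiling from 128 to 192 guests, a meaningful density gain for VDI-style deployments.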