My colleague Bridget Botelho has written a more detailed rundown of the SP1, with insight from Directions on Microsoft’s Rob Sanfilippo.
ORIGINAL POST (8/25/2010)
Let the upgrades begin? Microsoft has officially launched the first service pack for Exchange Server 2010, nearly three months after the beta was announced at TechEd North America this year.
The update includes archiving and discovery improvements along with various UI enhancements to the Exchange Management Console and Control Panel. Customers can also expect new mobile management features and a faster Outlook Web App (OWA) reading experience, according to Microsoft.
The archiving enhancements include the ability to import historical data from .PST files and automated deletion and archiving of email, along with other capabilities. Microsoft’s Michael Atalla posted a pretty detailed rundown of what’s new with SP1 back in April.
So what’s missing?
System Center Configuration Manager (SCCM) 2007 R3 should officially ship soon, with the release candidate approved earlier this month. The update is relatively minor, however, compared to the full-blown revamp on the way with SCCM v.Next, for which an initial beta is already available.
I spoke recently with Mark Mears, a systems specialist in Windows design and operations for Macy’s Inc., to get his thoughts on the new power management features in SCCM 2007 R3. (There are a few other updates as well, but the new power consumption reports are the big news.) He said he wasn’t surprised that Microsoft put out an R3 for SCCM, even with a v.Next beta already available.
Less than two months ago, we posted an article by Microsoft MVP Gary Olsen on the topic of domain controller virtualization. The article was actually a follow-up to something he wrote back in 2006, in which he pondered whether virtualizing DCs was really a good idea.
At the time of the first article (which predates Hyper-V), questions about I/O bottlenecks, security and even Microsoft’s questionable support of DC virtualization made the whole concept seem somewhat dicey. The recommendation was that while it was possible (Microsoft did have “how to” documentation, after all), virtual domain controllers should really only be implemented on a limited, non-critical basis.
Obviously, virtualization is a lot more popular today than it was back then, and continued technological advances have made it arguably the driving force in today’s IT market. So of course everyone is on board with virtualizing DCs now, right? Wrong.
While Microsoft has yet to make an “official” announcement, it seems most have heard by now that Windows legend Mark Russinovich has joined the company’s Windows Azure team. ZDNet’s Mary Jo Foley broke the story late last week via Microsoft evangelist Matthijs Hoekstra’s Twitter page, and since then the Internet has lit up with posts about the news.
So what does it all mean? Well, at the very least, it likely adds more credibility to Microsoft’s overall cloud platform. Russinovich is an extremely well-respected figure in the IT community, having cofounded Winternals Software and the immensely popular Sysinternals website. He’s been working as a technical fellow for Microsoft since 2006, most recently with the Core OS Division, and has been described by many as knowing Windows better than those who created it.
The latest addition to Microsoft’s Windows Azure platform promises to give customers greater control over cloud-based data, though the release isn’t likely to mean much to most IT professionals right away.
The company announced its Windows Azure Platform Appliance at the Worldwide Partner Conference (WPC) in Washington, D.C. this week. The appliance, which has also been dubbed Azure-in-a-box, is designed to let organizations deploy Windows Azure in their own data centers. It includes the company’s SQL Azure Database and will run on Microsoft-specified hardware, storage and network configurations.
Amidst all the talk about dynamic memory and Windows Server 2008 R2 SP1, one recent virtualization update has managed to fly under the radar – and it actually has nothing to do with the upcoming service pack. Though few seemed to notice, Microsoft recently announced that it has increased the number of virtual machines supported by Hyper-V R2 in a cluster.
As of R2, Hyper-V supported up to 384 VMs per server. That number dropped to 64, however, if those virtual machines were running in a cluster. Microsoft has updated Hyper-V to allow clustered nodes to also support a maximum of 384 virtual machines. The total number of VMs per cluster has also been increased to 1,000 (up from an initial limit of 960).
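For capacity-planning purposes, the updated limits reduce to simple arithmetic. The sketch below is purely illustrative: the 384-per-node and 1,000-per-cluster figures come from the post, but the helper function and the convention of reserving a node for failover are assumptions, not anything Microsoft prescribes.

```python
# Illustrative arithmetic only; limits are from the updated Hyper-V R2 guidance.
PER_NODE_LIMIT = 384      # max VMs on a single clustered Hyper-V R2 node
PER_CLUSTER_LIMIT = 1000  # max VMs across the whole cluster

def max_cluster_vms(node_count: int, reserve_nodes: int = 1) -> int:
    """Rough ceiling on running VMs while keeping `reserve_nodes`
    nodes empty as failover capacity (a common, but assumed, convention)."""
    active = max(node_count - reserve_nodes, 0)
    return min(active * PER_NODE_LIMIT, PER_CLUSTER_LIMIT)

# A four-node cluster with one node held in reserve:
# 3 * 384 = 1,152 VMs by node math, capped at the 1,000-per-cluster limit.
print(max_cluster_vms(4))  # 1000
print(max_cluster_vms(2))  # 384
```

In other words, with the new limits a cluster hits the 1,000-VM ceiling once three or more nodes are actively hosting VMs; below that, the per-node limit of 384 is the binding constraint.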
The third service pack for Exchange Server 2007 was made available for download this week, and not a moment too soon. SP3 finally allows Exchange 2007 to be installed on the Windows Server 2008 R2 operating system.
The release comes nearly one year after news first broke that Exchange 2007 would not be supported on R2, which naturally caused more than a few raised (and furrowed) eyebrows. At the time, the word from Microsoft was that the company was focused primarily on Exchange 2010, and testing for R2 support would only push back the release of Exchange 2007 SP2. Microsoft also noted that based on customer feedback, the ability to “support Windows Server 2008 R2 domain controllers in an existing Exchange 2007 deployment” was sufficient.
Just a quick post today. Microsoft released a new hotfix this month in response to startup hang issues on servers hosting virtual machines. Basically, some users running Hyper-V on Windows Server 2008 or 2008 R2 have experienced increasingly slow OS startup times following disk backups.
Systems engineer and IT writer Rob McShinsky said he has dealt with the issue for about a year now, attributing it to “registry bloat” from past snapshots on the host machine. He noted that the startup lag gets longer with each backup, since more and more entries are added to the registry.
You might not believe this, but one thing I took away from TechEd 2010 last week is that Microsoft reps seem to have a real issue with the term “cloud.” This point was brought up in nearly every conversation I had – how “the cloud” is too broad a term, and what the company is really invested in is cloud computing.
Judging by the number of times I heard some variation of this, it’s clear that this is something Microsoft is determined to drive home. In fact, the point was reinforced several times during the opening keynote, where the focus was on extending the data and tools that IT professionals use on premises to a cloud computing environment.
While this was a major topic at Microsoft Management Summit (MMS) 2010 with regard to System Center, it was even more so at TechEd, where Active Directory was added to the mix. I sat down with Microsoft’s Justin Graham not long after the keynote, and while we spent a good amount of time discussing what to expect from Windows 7 and Server 2008 R2 SP1, we also chatted a little about Microsoft’s plans for AD in the cloud and the company’s overall strategy.
It’s very warm and muggy here in New Orleans for Microsoft TechEd 2010, but it was cool, comfortable (and initially, loud) for the opening keynote with Bob Muglia, Microsoft’s president of the Server and Tools Business, and other company reps.
A lot of topics were covered during the hour-and-a-half-plus presentation, but as expected, the cloud dominated most of the conversation. The big theme centered on how IT professionals can extend the tools and data they currently use on premises (Active Directory, System Center products, etc.) to cloud computing environments. Muglia once again stressed that Microsoft is fully committed to the cloud, and that while it will ultimately affect everyone, it’s developers and IT professionals who are the focus right now.
Here are some other quick points from the keynote: