October 17, 2011 11:35 AM
Posted by: Ed Scannell
sql server 2012
We have seen it more than a few times before: Microsoft jumping into a new market already forged by many of its enterprise competitors, hoping to deliver lower-cost, alternative solutions. This time, it is Microsoft making a big deal out of big data.
At its PASS Summit this week Microsoft said it would deliver an implementation of Apache Hadoop for Windows Server and Azure sometime in 2012, with the help of Hortonworks, a Yahoo spinoff. In concert with this plan, the company said it would wire up SQL Server 2012 (“Denali”) to work with Hadoop as well. Hadoop is an open source framework that allows for the distributed processing of large data sets among clusters of computers.
The moves make sense given Microsoft’s renewed strategic ambitions to go after the upper reaches of corporate IT. Last month the company showed off Windows Server 8, which is chock full of enterprise-class cloud, virtualization, clustering and storage and systems management capabilities to handle dozens of servers as if they were one system.
To me, Windows Server 8 looks like a pretty solid foundation on which to build enterprise-class infrastructure and applications that let Fortune 500 shops do sophisticated analytics on massive amounts of structured and, more importantly, unstructured data. Microsoft, this time, seems intent on lining up the pieces needed to play with big data’s big boys.
Those big boys, IBM, Oracle, EMC, SAP and even Hewlett Packard, have already started delivering individual products and/or solutions to help corporate shops better control and extract meaning from the flood of information contained in social media feeds, e-mails and documents.
These are companies that have long catered to larger IT shops, which in turn have sunk billions into buying the products and services they rely on to be successful. These shops will be rather reluctant to toss out even smaller technology pieces dedicated to handling big data in favor of technologies from Microsoft, a company many IT shops wouldn’t mention in the same breath as the words “enterprise class.”
But then, this is Microsoft we are talking about, a company that has succeeded in markets where it initially came from way back in the pack: CRM, SharePoint, Exchange and the Xbox (although not too many enterprise accounts care about the latter – not during office hours, anyway).
It’s long been established that it takes Microsoft three tries to succeed with many of the core products that flourish today. I am not sure Microsoft has the luxury of failing a couple of times before it can break into the big data market, but that depends on how effectively the other big data boys can maintain their technology lead.
Another issue, particularly among loyal Microsoft users in smaller shops: why spend so much sweat and treasure pursuing yet-to-be-realized opportunities in new, higher-end markets when the company could be better serving core SMB customers with more effective solutions?
That is a legitimate question Microsoft should answer. Do you think Microsoft should aggressively pursue big data opportunities, or should it remain focused on better serving the needs of smaller enterprises? Let me know: firstname.lastname@example.org.
October 14, 2011 3:48 PM
Posted by: Ben Rubenstein
Michael Dell, Steve Ballmer, Windows 8, Windows Server 8
By Bridget Botelho, Senior Site Editor
AUSTIN, TX – Dell’s first worldwide show here gave IT pros a lineup of the who’s who of the IT industry– Michael Dell, VMware’s CEO Paul Maritz, Salesforce.com’s CEO Marc Benioff and Microsoft’s CEO Steve Ballmer. And all of these power players want to change the way IT works.
They all say cloud is the next evolution of IT and that IT pros have to move toward the cloud or be left behind. (Of course, these companies all have cloud products, so they all have a lot to lose if enterprise IT doesn’t take the bait.)
While IT pros are clearly interested in what cloud computing has to offer, they wanted to hear about the technologies they use today to do their jobs. That’s what IT uses Microsoft products for, and that’s what they want the company to focus on.
As one Virginia-based IT administrator at the show put it: “People don’t buy Microsoft software because it’s cool. They buy it for the same reason they buy a Chevy truck. Because you have a pile of [stuff] in the back yard that you need to move around and you know that they can do the heavy lifting.”
Ballmer gave a keynote Friday morning and provided a glimpse at Windows 8 for desktops and servers along with demos of some previously reported new features in those operating systems, such as Live Migration enhancements for Hyper-V 3.0.
He said the Windows 8 Preview versions are really just a flavor of the next generation OSes. He did not say when the beta version will be available, but both the server and desktop versions are expected to be generally available next year.
In the meantime, IT pros will have to get used to “Windows re-imagined” and its Metro-style applications, which will require some relearning of how IT has done things for decades.
One Louisiana-based IT services provider who is familiar with the Windows Server 8 preview said the touch interface for servers is a tough adjustment for IT. “I’m used to touch interface for devices, but for servers?” he said. “We just got comfortable with the ribbon interface for servers and now it’s changing again.”
Ballmer is well aware that “Metro Style” touch-enabled apps are outside administrators’ comfort zones, but his message was, basically, too bad.
“People complain that [Windows] changes too fast,” Ballmer said Friday. “You shouldn’t have picked the technology business if you weren’t willing to embrace change.”
While Microsoft shows off its stylish touch interface at tech conferences like Dell World, what IT pros really want from Microsoft is software that does what it is supposed to do, and does it fast. So as long as Windows Server 8 does a much better job than previous versions of the operating system and brings with it features that help IT pros do their everyday jobs, they will adjust just fine.
October 7, 2011 3:03 PM
Posted by: Ben Rubenstein
System Center, System Center Configuration Manager, Windows Azure, Windows Server, Windows Server 8
You want to know what’s happening with the world of Windows Server administration, but you don’t have the time to sift through your RSS and Twitter feeds to find the truly valuable content. Well, here we are to do it for you. Every week, we’ll curate the stories from the top sources and deliver them in a neat package – consider us your one-stop shop for Windows Server-related news.
Top stories this week:
Fueling the Microsoft cloud fire at Interop
During his keynote presentation at the annual Interop conference in New York this week, Microsoft Server and Tools VP Robert Wahbe continued the company’s cloud push, calling cloud platform Windows Azure “the next generation of operating systems.” While the focus was on Windows 8 integration, InformationWeek also reports that Windows Server 8 would offer a path to the private cloud if enterprises prefer to go that route.
Read more about the Windows Server and Windows Azure pairing
More power to PowerShell
If you’ve been following the news about Windows Server 8, you’ve probably heard that Microsoft is pushing command-line scripting as an increasingly effective – and perhaps the only – way to manage server networks. So the Community Technology Preview of PowerShell 3.0 is big news. Admins will be able to play with over 2,300 cmdlets and test the more complex workflow support and simplified language syntax, among other updates.
Need to learn PowerShell? Go back to Scripting School.
Microsoft opens its arms to another virtualization partner
Cloud management platform OpenNebula is the latest product to support Hyper-V, with a joint release set for October. ArsTechnica notes that the platform will support both Windows Server 2008 and Windows Server 2008 R2 SP1 (and presumably Windows Server 8, when it is released), while ReadWriteWeb touts the addition of access control lists (ACLs) in the latest version.
Other recently announced Hyper-V partners: Cisco, NEC, Broadcom
Here comes System Center 2012
Perhaps inspired by the wildly successful Building Windows 8 blog, Microsoft has launched the System Center Configuration Manager 2012 Blog Series, which will cover the new features of SCCM 2012 and, hopefully, be a “two-way dialogue” regarding tools and implementation. Will there be similar blogs for the other upcoming System Center releases?
More on System Center: A look at the SCCM 2012 Beta, SCVMM 2012 at a glance
September 8, 2011 3:38 PM
Posted by: Frank Ohlhorst
Windows Server, Windows Server 8
Microsoft has been ramping up the hype machine, looking to promote Windows 8 (server and client) as the next operating systems of choice for both the desktop and the enterprise.
Facts about both systems should become available on Sept. 15, when various NDAs lift for analysts, the press and others who will convey Microsoft’s messaging to the general public.
So far, Microsoft has said Windows Server 8 has 100 or so new features, but what may prove most interesting is the integration and evolution of Hyper-V, which sports a new feature called Hyper-V Replica. Microsoft has said the new feature provides “asynchronous, application-consistent virtual machine replication.” At a recent partial unveiling of Windows 8, Microsoft demonstrated a mission-critical SQL Server VM being replicated from a private cloud to an offsite data center with a few clicks.
Microsoft also showed Hyper-V supporting 16 virtual processors per VM, four times as many as are supported currently, and Microsoft didn’t say that 16 was the limit. The company has made bold promises about huge scalability that would support IT’s private cloud strategies. Other details are slowly filtering out via Windows chief Steven Sinofsky’s Building Windows 8 blog, where one of the first posts says, “Windows 8 reimagines Windows.”
So what’s next? Microsoft is hosting its press reviewers’ workshop Sept. 8-10 and its BUILD event on Sept. 13-16, so more details are sure to be forthcoming. Rumor has it that Microsoft will lift the veil of secrecy and set the hype machine to max power.
That’s the good news, but everyone is still itching to know when we will see an actual release. One Microsoft VP, Dani Lewin, hinted at an autumn 2012 release, whereas others have reported a potential April 2012 release. Judging from Microsoft’s track record for past release dates, it’s still anyone’s guess at this point.
September 6, 2011 7:19 PM
Posted by: Ben Rubenstein
Windows 8, Windows Server, Windows Server 8
Microsoft kicks off its sold-out BUILD Conference in Anaheim next week with…well, we’re not sure exactly with what. The company has been particularly stingy with details on the event, failing to provide an agenda or even an official list of speakers. So the rumor mill has been working overtime, with conjecture about what might be in store (ARM-based tablet giveaways, anyone?).
This much we know: Windows 8 will be revealed in some form, along with further details on Windows Server 8 following this week’s Reviewer’s Workshop in Seattle. Some specifics have emerged about both products, especially the client OS; the Microsoft team’s Building Windows 8 blog has provided a starting point for conversation about new features and strategies that will affect developers and administrators alike. With that in mind, here are some topics we do expect to come up at BUILD.
Is Windows 8 a game-changer, or just another Microsoft tease? The week should provide some insight into that question, particularly if a public beta release is offered. Some themes to watch for:
Taking on the tablets
Microsoft aims to compete in the tablet market with Windows 8. The new, optional touch-centric interface (which builds upon the Metro UI seen on Windows Phone 7) will purportedly scale across all devices and platforms, from phones on up. Rumored support for low-power ARM-based processors is also based on a desire to meet tablet demand.
Apps battle brewing
Microsoft confirmed that Hyper-V, previously limited to the server product, will come to Windows 8, along with VHDX, a new virtual hard drive format that allows for up to 16 TB of data. This brings up opportunities and questions alike for administrators, particularly those currently using VMware for virtualization. It could also bring MinWin, a stripped-down version of Windows, into the spotlight as an enabler for creating virtual appliances.
The Building Windows 8 blog has been fast and furious with updates on features in the new OS, including a ribbon-based Windows Explorer interface, native ISO and VHD file access, USB 3.0 support, and improved file management. These are not the sexiest offerings, so you can bet there will be additional reveals as the BUILD week goes on.
Windows Server 8
It’s inevitable that the server product would be stuck in the shadow of its client-facing sibling, but many BUILD attendees will be eager for information on Windows Server 8. Of the “100 or more new features” alluded to at the Worldwide Partner Conference in July, only a few have really been explored, chief among them Hyper-V Replica.
Sweet 16…or more?
As demonstrated at WPC 2011, admins will be able to manage 16 virtual processors per machine in this new version – which is apparently not the limit. But what is?
With its asynchronous virtual replication feature, Replica offers the ability to specify replication targets and snapshot intervals. This could have major implications for reducing server loads and increasing scalability.
Microsoft has touted its ‘private cloud’ solutions of late – with Hyper-V and System Center the products to make it happen. Expect this to be a consistent talking point throughout the week, prompting questions about application development as well as pricing and security risks.
But wait, there’s more…
- Azure. Several evangelists of Microsoft’s public cloud platform have been confirmed as BUILD speakers, and the company recently released an Azure toolkit for the mobile Android platform.
- Visual Studio 2012. What does the new version of Microsoft’s integrated development environment have in store?
What are you looking forward to hearing about at BUILD? Has the lack of information heightened your anticipation or lowered your expectations? Share your thoughts and predictions in the comments section, and look for our coverage from the event next week.
August 8, 2011 2:58 PM
Posted by: Frank Ohlhorst
Does anyone really know how much server virtualization costs?
I’m not talking about the cost of the hardware, or the management overhead, or any of the ROI or TCO elements: just the straight up cost of buying the software that makes server virtualization work.
Microsoft implies that virtualization is free because Hyper-V comes bundled with Windows Server 2008 R2, and with the upcoming Windows Server 8, where virtualization figures to play an even bigger role. But it is not really free if you have to pay for it, right? With Microsoft, you have to invest in Windows Server along with the costs attached to associated technologies. But if you are buying Windows licenses anyway, then Hyper-V at least comes across as free – or does it? And why does this matter?
Simply put, VMware is changing the licensing game with vSphere 5, and those licensing changes may make Hyper-V and other hypervisor offerings more attractive to cost-conscious IT managers. They also alter how vSphere is purchased: in some ways the licensing is simpler, and in others it nakedly shows that VMware is seeking to increase revenue.
With vSphere 5, pricing is no longer based on physical memory; it now reflects the virtual memory (vRAM) allocated to virtual machines. What’s more, VMware has switched to a licensing model with three tiers: Standard, which allows 24 GB of vRAM per license to be allocated across all virtual machines, and up to eight virtual CPUs per VM; Enterprise, which allows 32 GB per license and up to eight virtual CPUs per VM; and Enterprise Plus, which allows 48 GB per license and up to 32 virtual CPUs per VM. Licenses are still sold per CPU socket, but there is no longer a limit on the number of cores per socket.
This adds up to a significant change in licensing terms. vSphere 4 encouraged a “scale-up” approach: buying systems with a few CPU sockets and massive amounts of memory, and running lots of virtual machines on them. This contrasts with a “scale-out” approach, in which IT managers purchase more servers, each with much less RAM.
The reason: adding physical memory was “free,” at least until you hit 256 GB, while adding sockets (or new servers) cost money. The result was enterprises running VMware on two-socket servers with hundreds of gigabytes of RAM.
With vSphere 5, the scale-up model is penalized. A case in point: a two-socket, six-core-per-socket machine with 256 GB of RAM used to require two Enterprise licenses. Now that pricing is tied to allocated vRAM, fully using that memory would require eight Enterprise licenses (256 GB divided by the 32 GB entitlement per license). Simply put, that configuration needs four times as many vSphere 5 licenses, and costs four times as much, as it did under Version 4.
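The arithmetic in that example can be sketched in Python. This is an illustration only: the 32 GB-per-Enterprise-license vRAM entitlement, the per-socket license requirement and the two-socket, 256 GB scenario come from the text above, and the sketch assumes all physical RAM is allocated to running VMs.

```python
import math

# vRAM entitlement per vSphere 5 Enterprise license (GB), per the tiers above
VRAM_PER_ENTERPRISE_LICENSE_GB = 32

def v4_enterprise_licenses(sockets):
    # vSphere 4 Enterprise: one license per CPU socket;
    # physical RAM was effectively free (below the 256 GB ceiling)
    return sockets

def v5_enterprise_licenses(sockets, vram_allocated_gb):
    # vSphere 5: still one license per socket at minimum, but you also
    # need enough licenses to cover the vRAM allocated to all running VMs
    vram_licenses = math.ceil(vram_allocated_gb / VRAM_PER_ENTERPRISE_LICENSE_GB)
    return max(sockets, vram_licenses)

# The scenario from the text: a two-socket host with all 256 GB allocated to VMs
old = v4_enterprise_licenses(sockets=2)
new = v5_enterprise_licenses(sockets=2, vram_allocated_gb=256)
print(old, new, new / old)  # 2 8 4.0
```

Shops that allocate less vRAM than their physical RAM would feel the change less sharply, which is why the penalty falls hardest on densely consolidated scale-up hosts.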
That makes free technologies such as Hyper-V all the more attractive. The real question is: will IT abandon vSphere in favor of Hyper-V? Probably not. A company that has already invested in vSphere will probably swallow the price changes and stick with what it knows. On the other hand, when Windows Server 8 hits the streets, companies looking to migrate may seriously consider going with Hyper-V.
Only time will tell. However, it certainly doesn’t hurt to take a closer look at Hyper-V to see if it really offers something for nothing.
Frank Ohlhorst is an award-winning technology journalist, professional speaker and IT business consultant with over 25 years of experience in the technology arena. He has written for several technology publications, including TechTarget, ComputerWorld and PCWorld. Ohlhorst was also the Executive Technology Editor for Ziff Davis Enterprise’s eWeek. You can contact him at email@example.com
August 1, 2011 3:30 PM
Posted by: Ben Rubenstein
System Center Operations Manager
Microsoft last week officially released the latest beta version of System Center Operations Manager, its health and performance monitoring product. System Center Operations Manager 2012 beta is now available for download. According to the accompanying Microsoft FAQ, SCOM 2012 offers:
· A single, consistent view across datacenters and clouds with an option for customizable dashboard templates;
· AVIcode technology for monitoring .NET applications;
· Support for monitoring heterogeneous environments including Windows, Linux, and UNIX servers;
· Integrated network device monitoring and alerts;
· A simplified management structure with support for automatic failover.
Eager system administrators and bloggers are wasting little time testing these new features and looking for the soft spots. One such tester expressed some disappointment that the latest release doesn’t feel significantly different from SCOM 2007. “I had expected an updated console and vastly improved notifications management, but as far as I can see, a lot of the code is the same,” said Trond Hindenes, a senior consultant at AVAN in Norway. “This upgrade might not be as much of a ‘no-brainer’ as I had hoped.”
Still, Hindenes notes some positive updates to SCOM 2012, including the addition of deep monitoring of .NET-based Web applications thanks to Microsoft’s acquisition of AVIcode. In a recent post for GotchaHunter, Alex Shlega writes that while deployment of .NET monitoring is now easier, the limited server-side configuration options make “application deep-dive troubleshooting” a difficult proposition. In response, Daniele Muscetta of Microsoft tweeted that more options will be visible in the RC version.
Another significant change in SCOM 2012 is that there is no longer a root management server, as there was in Operations Manager 2007. Instead, all management servers are peers. Hindenes suggests this will mean better scalability, enabling “easier high-availability configuration without configuring Windows clustering for the SCOM infrastructure.” Microsoft MVP Graham Davies offers more background on the change on the SystemCenterSolutions.com blog.
The management server change is one of several “little things that make [SCOM 2012] more scalable and reliable,” Davies noted in a recent phone interview. Still, while features like new dashboards, network device monitoring and the ability to monitor non-Windows environments are welcome, “they don’t make for a revolutionary release like the jump from MOM 2005 to SCOM 2007,” Davies said, warning that companies with legacy systems must be careful when planning for such a transition, as it is important to ensure that their existing environments meet the new supported configurations. For example, “there are over 80 new PowerShell cmdlets, so while existing scripts will work on SCOM 2012, enterprises might want to upgrade them to gain the new functionality. Additionally, there is no support for Windows 2000 agents and the integrated AVIcode (now APM) monitoring only supports web applications on IIS7.”
SCOM 2012 may look significantly different in its final form (RTM is estimated for the latter half of 2012), and we’ll post updates as that release becomes clearer.
Let us know what you think about this story; email Ben Rubenstein at firstname.lastname@example.org.
July 21, 2011 5:34 PM
Posted by: Ed Scannell
Windows Server 8
Microsoft finally dragged Windows Server 8 out into the light at its Worldwide Partner Conference last week, offering up a brief demo and showing off a few of the “100 or more” new features expected to be in the product. What was clear from this glimpse is that Microsoft will place a heavier emphasis on virtualization in the new release.
For instance, it showed off Hyper-V Replica, which offers support for at least 16 virtual processors. The new technology also supplies asynchronous virtual machine replication to offsite locations for things such as mission-critical data contained in SQL Server databases. This is a direct response to users who have complained about the inability to properly scale the product to handle larger workloads.
Company officials made it a point to say Hyper-V Replica will work with Remote FX and Dynamic Memory, two virtualization capabilities included in Windows Server Service Pack 1 released earlier this year. Windows Server 8 will remain an important building block on which corporate users can build private clouds, company officials noted, which implies virtualization will play a role here, too.
Jeff Woolsey, the Principal Program Manager Lead for Windows Server Virtualization who conducted the demo, said Windows Server 8 will be capable of delivering “massive, massive scale” as well as “unlimited replication” right out of the box.
Woolsey talked about how Hyper-V Replica can also improve fault tolerance, while side-stepping much of the additional hardware and software costs associated with upgrading to that capability.
While it was good to see Microsoft at least begin to talk about the product (which, Microsoft CEO Steve Ballmer said in his conference keynote, now owns 76% of the server market), you have to wonder why it took so long.
According to one Windows Server beta tester I talked to this week, that reluctance may center on not wanting to tip its hand too early to its archrival in the virtualization market, VMware.
“They are making such a big play in (server) virtualization and virtual desktops, they didn’t want to give VMware too much of a heads-up. They are also trying to figure out how to sustain their licensing model as well as take on Citrix and VMware. There is a lot they still have to work through before they roll this out,” he said.
There might be something to that. On the same day Microsoft demoed the Hyper-V Replica-Windows Server 8 combination, VMware trotted out vSphere 5. The new version, perhaps not coincidentally, can support virtual machines about four times more powerful than its predecessor could, and contains improvements across its virtualization capabilities.
Unfortunately, VMware also announced a new licensing plan for vSphere 5 that caused a stir among its users. Instead of being based on physical CPUs and physical RAM per server, pricing is now per CPU with entitlements tied to the virtual memory allocated to VMs. Many VMware users believe this switch will cost them significantly more.
The new licensing could slow adoption of VMware’s core offerings and, at least for now, tilts the playing field in Microsoft’s favor with its free hypervisor built into Windows Server. But Redmond must be careful not to step on a landmine with a new licensing plan that takes away its cost advantage.
There is no word on when Microsoft might deliver a meaningful code drop of Windows Server 8 to developers. We’ll have to wait until Microsoft’s Build conference in mid-September to get more technical details.
Let us know what you think about this story; email Ed Scannell at email@example.com.
May 25, 2011 8:28 PM
Posted by: Ed Scannell
Cloud Computing, Windows Server HPC
Microsoft’s Azure platform has generally received good reviews from many IT shops and third party developers and appears to be gaining mindshare despite a band of fierce competitors including Google, IBM and Amazon.
Redmond officials, who mentioned to me at Tech Ed last week they now have 30,000 Azure customers, can’t afford to take anything for granted. They will have to stay aggressive and focused if they hope to maintain the platform’s momentum in a fast moving market.
The company could get some help sustaining this momentum from an unexpected source — Windows Server HPC 2008 R2. Typically focused on the higher-end technical computing markets, the platform is one some company officials want to promote and make available to a much broader IT and developer audience. And by getting IT shops and developers to deliver a more diverse set of commercial apps for HPC, they believe they can drive higher usage of Azure.
“There are more than a few Fortune companies and developers that can benefit from (HPC’s) parallel and clustering capabilities. These apps would be a natural fit for Azure,” said one Microsoft official at last week’s show who asked not to be named.
A couple of months ago Bill Hilf, the General Manager of Microsoft’s Technical Computing Group, said he believes HPC R2 applications will drive higher usage of Azure across many IT data centers. He went as far as to say that technical computing workloads and other compute intensive applications would prove to be the killer app for Azure.
I’m not sure I would go that far, but it gives you an idea of what Microsoft’s hopes and dreams are for HPC R2 as a general purpose mainframe in the cloud.
Further evidence Microsoft wants to lift HPC R2 out of its niche and into the much bigger cloud arena was its reorganization earlier this month. That reorg moved the HPC R2 team into the Azure organization run by Bill Laing.
Also earlier this month, Microsoft delivered Service Pack 2 for HPC R2 that, not surprisingly given the above evidence, contains several new features pertaining to Azure including the ability to add Azure VM roles to clusters and the ability to add MPI-based jobs on Azure nodes.
Another bridge Microsoft will build between HPC R2 and Azure is Dryad, a competing technology to Google’s MapReduce and Apache Hadoop. Dryad helps developers create distributed programs that can run on anything from small clusters up to large datacenters. The company hopes to deliver Dryad for HPC R2 by year’s end.

It will be interesting to see how many IT shops with compute-intensive workloads Microsoft can attract to its cloud strategy, using HPC R2 as the incentive.
If you are using Windows Server HPC 2008 R2 in your datacenter to host cloud applications, or just exploring some possibilities, let me know.
Ed Scannell is Executive Editor with SearchDatacenter.com. He can be contacted at firstname.lastname@example.org.