I had a fascinating phone conversation with Executive Editor Ed Scannell at TechTarget yesterday, who clued me into one very interesting facet of his recent trip to Redmond for a sneak preview of the coming version of Windows Server (which everybody is calling Windows Server 8 these days, even if it will undoubtedly be called Windows Server 2012 by the time it ships). Amidst our discussion of interesting rumors and suppositions (such as the notion that Windows 8 desktop may precede the release of Windows 8 Server by as much as six months, with perhaps an early release for ARM and a later x86 release to jump on smartphone and tablet opportunities), he revealed to me that Windows 8 will include a facility known as “Cluster Aware Update,” which works with an eponymous wizard called the Cluster Aware Update Wizard, or CAUW (I want to pronounce this “cow,” but the word on the street is that this is “not cow”).
Apparently, there was some loose talk about how this tool could make Patch Tuesday updates a thing of the past for server administrators, and also allow Microsoft to push updates on an as-needed basis. “Bully for Microsoft!” Ed and I agreed, but most enterprises aren’t going to pick up and run with Microsoft patches without testing them outside the production environment first, and even then they will only roll patches that pass testing and verification into production during carefully scheduled update windows. I’m not sure how this is supposed to end Patch Tuesday, but Ed has commissioned me to write a 600-700 word article on this impending technology for submission later this month. I look forward to reporting further on this as I learn more. In the meantime, chew on this cool blog from System Center guru Robert Smit (a Server Cluster MVP for Microsoft), which even has screenshots of the CAUW at work: “Windows 8 Cluster Update #CAUW (Cluster Aware Update Wizard).” Way cool!
Die-hard MS pre-release technology consumers already know that CTP is shorthand for “Community Technology Preview,” a pre-beta technology release stage that usually seeks to help developers become familiar with and prepare for important coming Microsoft technologies. Timed to coincide with the Developer Preview for Windows 8 (which shares a common code base with Windows Server, lest anybody forget that scripting languages are more important for servers than desktops, by and large), there’s a CTP for Windows Management Framework 3.0 now available, and it includes a reworked version 3.0 of the Windows PowerShell scripting language that offers some pretty cool new features and capabilities.
There’s a nice post to the Windows PowerShell Blog entitled “Windows Management Framework 3.0 Community Technology Preview (CTP) #1 Available for Download” that explains all this stuff in more detail, but here are some high points of what the latest iteration of PowerShell 3.0 is going to deliver:
- Support for workflows that can “perform complex, large management tasks such as multi-machine application provisioning.” Also “Windows PowerShell workflows are repeatable, parallelizable, interruptible, and recoverable.”
- Support for robust sessions means that PowerShell sessions can “…automatically recover from network failures and interruptions…” and more.
- Simplified language syntax for PowerShell 3.0 makes “…commands and scripts look a lot less like code and a lot more like natural language.”
- Extensions to support cmdlet discovery, automatic module loading, and a new Show-Command cmdlet that provides easy ways for users to find and use cmdlets properly.
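For a concrete taste of the simplified syntax, here’s a small before-and-after sketch (hedged, of course, since the CTP could still change; the `WorkingSet` example is my own, not lifted from the blog post):

```powershell
# PowerShell 2.0 style: Where-Object takes a script block with $_ and operators
Get-Process | Where-Object { $_.WorkingSet -gt 100MB }

# PowerShell 3.0 simplified syntax: no braces, no $_ required
Get-Process | Where-Object WorkingSet -gt 100MB

# The new Show-Command cmdlet pops up a window that helps you discover
# a cmdlet's parameters and build a correct command line interactively
Show-Command Get-EventLog
```

The second form reads much more like natural language, which is exactly the point the PowerShell team is making.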
You’ll also find changes to Windows Management Instrumentation (WMI), which ships as part of the Windows Management Framework, including a new provider development model that makes it easier to build providers, and that extends management applications running on Windows beyond the Windows umbrella. Likewise, remote management (WinRM) has also been extended to include more robust and resilient client sessions, for more comprehensive remote management facilities. There’s even a PowerShell Web Service that offers remote access for calling cmdlets from Windows or non-Windows clients. Way cool!
Wow! There’s a fascinating new post on the Building Windows 8 blog from Monday (9/26/2011). It’s entitled “Signing in to Windows 8 with a Windows Live ID,” and it explains how Windows 8 users can employ a Windows Live ID as the OS log-in so they can synchronize application settings, preferences, and “environment stuff” across multiple Windows 8 machines and devices.
In a nutshell, things work as explained in this bulleted list, which I lifted directly from the Live ID blog post I just cited:
- Associate the most commonly used Windows settings with your user account. Saved settings are available when you sign in to your account on any Windows 8 PC. Your PC will be set up just the way you are used to!
- Easily reacquire your Metro style apps on multiple Windows 8 PCs. The app’s settings and last-used state persist across all your Windows 8 PCs.
- Save sign-in credentials for the different apps and websites you use and easily get back into them without having to enter credentials every time.
- Automatically sign in to apps and services that use Windows Live ID for authentication.
The key phrase that impelled my blog title appears a little later in the post, in a paragraph that explains in more detail what kinds of settings are captured and preserved (“… your lock screen picture, desktop background, user tile, browser favorites and history, spell check dictionaries, Explorer settings, mouse settings, and accessibility settings, among many others…”). It says that all of these things “… are now associated with your Windows 8 account and stored in the cloud.” But alas, this capability apparently applies only to Metro style apps, because the blog further states: “If you want to roam your settings for desktop apps then you can continue to use the mechanisms available for roaming profiles and client side caching of files available with Active Directory and Windows Server.”
Dang! I see the emergence of a two-tier system here, and a powerful impetus to move users onto Metro style applications. It will be VERY interesting to see how this all plays out, and to see if application developers take the bait and start building Metro style interfaces for most common and popular applications. Only time will tell, but this should be fascinating to watch and learn from.
OK, so I’ve been blogging lately about a bunch of steps to migrate from a conventional notebook PC hard disk to an SSD replacement. I’m starting that process today on my Alienware M11x notebook, a powerful but compact Dell unit I purchased a couple of months ago. This blog post is the first of a two-part series in which I’ll work through my recommended motions, report on what I learn along the way, and flag any gotchas I happen to encounter. This posting is the “Before” part, wherein I’ll provide snapshot information about the system in question and the state of the conventional hard drive before I fire up the migration process. Here’s a professional photo of the system in question (a very sweet little notebook PC):
Side view of the Alienware/Dell M11x notebook PC
I’m going to tweet my activities and progress as I work through the migration process, and then summarize what I record when I file Part 2 later this week. But for now, here’s what I see when I poke around on this system:
| AlienWare M11x Hardware Summary (Before) | |
| --- | --- |
| CPU | Intel i7-2617M (1.6 GHz) |
| RAM | 2×4 GB DDR3-1333 Samsung |
| Video | Intel 3000 graphics processor / Nvidia GeForce GT 540M (2 GB) |
| HD | Seagate ST98400423AS 500 GB (7,200 RPM) |
Hard Disk/OS Summary
| AlienWare M11x Hard Disk/OS Summary (Before) | |
| --- | --- |
| System drive | 40.7 GB consumed |
| OS | Windows 7 Ultimate x64 |
| VSS space allocated | 9.15 GB (2% reserved) |
| Windows folder size | 14.9 GB |
| Windows Experience HD score | 5.7 (Seagate 500 GB HD) |
Because I’m going with a nominal 120GB SSD on this machine, I don’t actually have to reduce the drive footprint to get anything to fit. But I’m going through the motions anyway, to report on my findings and get more practice, and also to document the entire migration process along the way. Stay tuned for my next report on this on Wednesday morning.
Microsoft’s UEFI boot sequence diagram from “Building Windows 8″ blog post
The BW8 (Building Windows 8) blog struck gold again yesterday (9/22/2011) with a new post entitled “Protecting the pre-OS environment with UEFI,” a discussion of how the Unified Extensible Firmware Interface creates a secure boot environment for PCs modern enough to include this latest-generation collection of “…chipset, hardware, system, firmware, and operating system…” components in their makeup.
I’ve been curious about UEFI for a long time now, having read about it in numerous books, articles, and discussions of PC architecture and BIOS replacement technologies. This blog post goes a long way toward filling in the gaps in my knowledge base, and can probably do the same for you, in explaining what UEFI is, how it works, and how it helps to define a firmware validation process better known as “secure boot.” The big issue is that before the OS loads, older BIOS-based systems can be hijacked by malicious boot loader programs that operate outside the security coverage built into an operating system or antimalware software; loading that early lets malware take up residence on a system before any of those protective or palliative measures can be brought to bear.
The only problem here is that motherboard makers for desktop and notebook PCs have been slow to release UEFI-based systems (MSI demoed an X79 motherboard with UEFI at IDF on September 19, 2011, and AMI announced its support for the UEFI BIOS at the recent MS BUILD conference on September 15, 2011). I don’t think we’re going to see widespread desktop/notebook support for this technology until 2012, but of course that means Windows 8 will be able to support it, though only on systems new enough to include built-in UEFI. If you ask me, this strikes another interesting blow at the notion that “any system that runs Windows 7 can also run Windows 8” that MS has bruited about from time to time. Given the recent Hyper-V disclosures (which require SLAT support in the processor to run the hypervisor) and now this, it looks like while older Windows 7 PCs may be able to run Windows 8, they will most assuredly not be able to take advantage of some of its most interesting and advanced features.
To learn more about UEFI, check out the BW8 blog link at the head of this post. You may also want to consult the following resources as well:
“UEFI-Just How Important It Really Is” (Hardwaresecrets.com, 9/21/2011)
“Unified Extensible Firmware Interface” (Wikipedia, references materials dated as recently as 9/20/2011, and includes a great “External Links” section with pointers to other references)
I’m in the process of switching all my production notebook PCs over from conventional hard disks to SSDs, thanks to a recent sale at Newegg that offered the latest 120 GB OCZ Agility 3 units (SATA 2/3, SandForce-based, with astonishing read/write data rates on SATA 3 in excess of 500 MB/s) for under $170 a pop. To that end, I’ve been noodling on various techniques for reducing the Windows 7 footprint to move from 250-500 GB HDs to a 120 GB SSD replacement. I’ve been blogging like crazy on this topic over at EdTittel.com, so I’ll share some pointers and observations here, and make some recommendations for similarly minded IT professionals and computing enthusiasts:
- You can use a nice CodePlex tool called DriverStore Explorer to identify and delete obsolete or duplicate drivers from the Windows 7 C:\Windows\System32\DriverStore\FileRepository directory. It has never delivered less than 1 GB of disk space savings on any of the systems I’ve tried it on, notebook and desktop alike. See my blog “Another Nice System Drive Cleanup Maneuver: DriverStore Explorer.”
- If you’re running VMs on a PC, you can easily relocate the huge virtual hard drive (VHD) files from your system disk to a different hard disk, even a plug-in portable USB (preferably USB 3) hard disk, if your notebook supports that technology. Read more about this at “More Noodling on System Drive Space-Saving: Move those VHDs!” I have yet to save less than 4 GB on any VHD I’ve relocated, sometimes a great deal more space than that, in fact.
- If you run Outlook without storing messages via an Exchange server, you’ll have two or more large PST files to deal with by default–namely, Outlook.pst and Archive.pst. I’ve picked up some interesting tools and techniques for repairing and compacting such files, covered in my posting entitled “Interesting tips and tweaks for PST file cleanup & optimization.” Check it out: on heavily used systems these techniques usually deliver 2-4 GB of space savings!
- Be sure also to check your restore point allocation for the system: click the Configure button in the Protection Settings pane on the System Protection tab of the System Properties window (launch this by clicking Start, right-clicking Computer, then clicking Properties, then System Protection) to manage your Disk Space Usage setting. My laptops with 250 or 500 GB drives routinely allocated 10 GB or more to this use; I trim that to under 4 GB for a 120 GB SSD. More big space savings that way.
- Download and run Piriform’s CCleaner on your notebook or PC hard disk before you do your migration process. There’s absolutely no point in copying over stuff that needn’t be on your drive in the first place. You should also grab the SourceForge software package WinDirStat and use it to zero in on large files and directories on your system to see if there’s stuff you can move or delete before imaging over the hard disk contents to an SSD. Your mileage will vary on space savings realized.
- Look over your applications carefully before you migrate, too–I use Revo Uninstaller to clean apps off of my systems–because you can save on disk space by eliminating unnecessary applications. On new PCs or laptops, I’ll often use the PC Decrapifier to get rid of pre-installed programs I don’t want or will never use. You may find it helpful, too. Again, your mileage will vary on space savings realized.
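Incidentally, the restore point trim mentioned above can also be done from an elevated command prompt via the Volume Shadow Copy admin tool, which is handy if you’re scripting the prep work. A sketch (the 4 GB cap matches the figure I use above, but tune it to your own drive):

```powershell
# Run from an elevated prompt. First, see how much space the Volume
# Shadow Copy service (restore points) has reserved on the C: drive:
vssadmin list shadowstorage /for=C:

# Then cap shadow-copy storage on C: at 4 GB ahead of the SSD migration:
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=4GB
```

Resizing below the space already in use will discard the oldest restore points, so make sure you have a current backup or image first.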
If you have any other space-saving tips for Windows 7 machines, please share them as comments on this blog post. If I get any real doozies, I’ll write them up and give you credit for inspiration, too! Thanks in advance.
So now that I’ve been introduced to Second Level Address Translation (SLAT) as a requirement for Hyper-V on the upcoming release of Windows 8 (see my recent blog on this discovery at edtittel.com: “Be Prepared for Windows 8 Hyper-V Gotcha“) I’ve been wondering about why it has to be this way. So I dived into the newly-created Windows 8 Developer Preview MSDN forums to see what I could learn there, and in the context of a discussion entitled “Hyper-V cannot be installed on Lenovo Thinkpad W500” I found a lovely source of enlightenment on this topic. While it is technical enough to make some eyes bleed, it does provide a key insight into the SLAT requirement.
Microsoft’s Virtualization Guy Explains Hyper-V and SLAT
This source is Ben Armstrong’s blog from waaaaaaaaaaaay back in November 2009 entitled “Understanding High-End Video Performance Issues with Hyper-V.” In a very tight nutshell, the upshot of this posting may be summarized as follows:
1. High-end graphics cards require frequent and resource-intensive/expensive memory allocations that come from using a PAGE_WRITECOMBINE protection operation in the hypervisor.
2. This requires the kernel memory manager to flush the translation lookaside buffer and the page cache, which in turn sends an intercept into the hypervisor.
3. Lots of resource-intensive hypervisor operations like this turn a (normally) infrequent operation into something frequent enough to hamper or cripple performance. There’s a great diagram in the blog post that shows why a hypervisor falls prey to this limitation, while a virtual machine manager (VMM) like that found in Virtual PC or Virtual Server remains immune to the problem. And interestingly, the faster and more capable the video card in a PC, the more likely it is to fall prey to this problem.
One fix mentioned in Armstrong’s blog reads as follows: “Get a system with Second Level Address Translation (SLAT).” This lets the hardware handle multiple translation lookaside buffers, one per VM, which is just what’s needed to sidestep the potential performance bottleneck. It looks like Microsoft simply opted to require SLAT rather than risk customer complaints about significant desktop delays on older hardware. In light of Armstrong’s admonitions to use the SVGA driver, choose a low-end graphics card, or turn off advanced graphics features, I find this decision “interesting” (in the sense of the Chinese curse) but also eminently understandable and defensible. But I still sigh to think of my otherwise very capable quad-core Yorkfield processors (QX9650s, in fact) being consigned to the scrap heap of history (or turned into hand-me-downs) because they can’t run Windows 8 Hyper-V.
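If you’re wondering whether your own hardware makes the cut, a quick sketch: the free Sysinternals Coreinfo utility can report SLAT support from an elevated prompt (assuming you’ve downloaded coreinfo.exe into the current directory):

```powershell
# Coreinfo's -v switch dumps virtualization-related CPU features.
# On Intel chips, look for EPT (Extended Page Tables) marked with '*';
# on AMD chips, look for NPT (Nested Page Tables). Either one is SLAT.
.\coreinfo.exe -v
```

If the SLAT line shows a dash instead of an asterisk, that machine is in the same boat as my poor QX9650s.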
OK, I admit it: I’m dangerous when bored. Earlier this week, I was stuck in an all-day conference where I started to lose both interest and focus. As is sometimes my wont when that happens, I fired up DriverAgent on my HP dv6t notebook PC. No sooner had I fired it up than it found a new driver for the unit’s built-in Nvidia GeForce GT 320M graphics adapter. Here’s what Gabe Topala’s excellent SIW (System Information for Windows) has to say about that component:
SIW provides the current details on my notebook graphics situation
At first, I turned up the Verde 280.26 driver (which is what Nvidia still recommends for this chipset on their site), but when I installed it on my machine, Windows 7 presented me with a black screen the next time I rebooted, which immediately told me this graphics driver and my particular configuration weren’t suited to each other. I rebooted in Safe Mode, using HP’s equivalent of the “press-and-hold F8” maneuver immediately after the BIOS boot completes, to get a generic VGA driver that would actually show me something on the screen.
Next, I launched Control Panel, and took advantage of Device Manager’s “Roll Back Driver” button on the Driver tab in the Properties window for the affected device. Luckily for me, this worked like a charm and my system was working again after one more reboot to switch over to the previous driver version.
The Roll Back Driver button can occasionally be a real life-saver
This also got me to thinking about what I would have had to do to get back up and running if there hadn’t been a driver to roll back, or the rollback effort had failed. My quickest fix would be to try the Last Known Good Configuration for the system (another boot option in the F8 menu). Next, I would download a known good working driver to a USB stick, then install that driver in Safe Mode, and try again. I’m pretty sure either one or the other (if not both) of these approaches would restore the unit to proper operation. As it was in my case, I’m pretty sure that none of my meeting colleagues noticed that anything was amiss with my system: it took less than five minutes to set things right.
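One more trick worth noting: if hitting F8 at exactly the right instant proves fiddly (as it often does on fast-booting laptops), you can force the next boot into Safe Mode from an elevated prompt before you reboot. A sketch:

```powershell
# Force the next boot into minimal Safe Mode (run elevated)
bcdedit /set {current} safeboot minimal

# ...then, once the bad driver is rolled back or replaced,
# remove the flag so subsequent boots return to normal:
bcdedit /deletevalue {current} safeboot
```

Obviously this only helps when you can still get to a prompt before things go sideways, but it pairs nicely with the Roll Back Driver maneuver.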
Eventually I tried the beta 285.27 graphics driver in this machine, and I’m happy to report it’s working just fine. This graphics chipset is something of a wimp, but using MSI Afterburner with the settings turned up as much as I dare, it works OK for me.
Rats! I’m working out of town this week, and when I tried to grab the “new Windows Dev Center” Web page yesterday, it wasn’t up and running just yet. Now, I just tried again and there it is! The Developer Preview versions of Windows 8 builds (including both checked builds with debugging code inserted, and unchecked builds for performance testing and just plain fooling around) are available for download. I won’t be able to grab and go anywhere with this stuff until I get back home tomorrow afternoon, but don’t let me stop you from beating me to that punch. Enjoy!
[Added 10 AM 9/14/2011] See this Windows 8 Developer Preview Hands-On article from Laptop Magazine for a great overview of the UI, plus new features and functions. A very nice piece of work, but too short on screen caps and illustrations; however, given the speed of its production, I totally understand why this might be the case. Read it to get a pretty good sense of what you’ll find in this Alpha version of the Windows 8 OS.
Apexwm is an active but anonymous IT professional who posts all over UK forums, and who also blogs prolifically for ZDNet in the UK. He recently posted a fascinating item entitled “Windows 7 driver signing conundrum” wherein he recounts a vexatious “chicken-and-egg” problem. The issue has to do with trying to slipstream a driver for a particular device into a Windows 7 image (in this case, an integrated wireless network adapter from Ralink Technology) when the manufacturer only makes an InstallShield program available to install the driver, which must be run manually after Windows itself is installed.
In attempting to automate the install, the author used a test machine to get the list of necessary files from Driver Details in Device Manager, and also grabbed the related oem*.inf file from the C:\Windows\inf folder to complete the collection of items to attempt an automated and unattended install. Imagine his frustration when this effort produced the following error message:
Windows cannot verify the digital signatures for the drivers required by this device. Error Code 52.
The drivers are in fact signed, and the problem is apparently well documented all over the Internet for drivers of all kinds. But alas, no easy fix is available without turning to third-party software products to remedy this known Windows defect. I’m sure apexwm isn’t the only Windows admin who would voice his sentiments on this approach: “No thanks. I’m not about to start installing a bunch of unknown 3rd party products to try and help with a Windows problem.”
If I were in those shoes, however, I would try to take advantage of Windows’ ability to run post-install scripts after the initial installation process completes (which is how additional common applications such as Office, 7-Zip, FileZilla, and so forth often get installed at the tail end of automated installation processes). It seems to me that if the driver uses an InstallShield .exe file, there should be some way to script or automate its installation as a post-install task.
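To make that concrete, here’s a hedged sketch of what I have in mind. Windows Setup automatically runs %WINDIR%\Setup\Scripts\SetupComplete.cmd after installation finishes, and that script could kick off a one-liner like the following (the path is my own invention, and the /s switch is the silent-mode convention legacy InstallShield installers typically honor, usually paired with a response file recorded beforehand via setup.exe /r; check the vendor’s documentation for the exact switches):

```powershell
# Hypothetical post-install task: run the vendor's InstallShield
# driver installer silently, waiting for it to finish before Setup
# declares victory. Path and switches are assumptions, not gospel.
$setup = 'C:\Drivers\RalinkWLAN\setup.exe'
Start-Process -FilePath $setup -ArgumentList '/s' -Wait
```

Not as elegant as true driver slipstreaming, to be sure, but it gets the driver onto the machine without any hands on the keyboard.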
I do get apexwm’s complaint that Linux/Unix does a much better job of integrating drivers into its kernel directly, and that compilation into the kernel is an option for those few odd drivers that aren’t already included under this umbrella. But I’m a member of the “where there’s a will, there’s also a way” club of Windows-heads and suggest that he needn’t have given up in defeat.