Windows Enterprise Desktop


January 30, 2012  4:02 PM

Windows 8 and ARM processors



Posted by: Ed Tittel
stable Windows 8 ARM code could be released to developers soon, stable Windows 8 developer release for ARM rumored for Feb 2012

An interesting rumor popped up on SlashGear this morning, in an article from Chris Davies entitled “Windows 8 on ARM stable release in February tip developers.” It seems that despite earlier issues with the stability of the developer preview for Windows 8 on ARM CPUs, another, much more stable release of Windows 8 will be released in February — again for developers — to coincide with the “customer preview” release of Windows 8 for x86 CPUs that heralds another milestone in the march to RTM (release to manufacturing) and GA (general availability).

Citing sources from CNET, Davies said “…there’s no obvious reason that [an] ARM-version of Windows 8 … should be ‘staggered’ from the traditional x86 build.” Furthermore, the same sources opined that “…ARM alternatives to Intel and AMD based Windows 8 machines could ‘undercut them by hundreds of dollars…’” Given the popularity in smartphones of chips from Texas Instruments and Qualcomm, among others (both companies showed Windows 8 prototypes at this year’s CES), this could be a huge coup for Microsoft in finally making a dent in that non-PC marketplace.

Should be very interesting to see how all this plays out. Stay tuned!

January 27, 2012  7:17 PM

Intel Driver Update Utility and Intel Chipsets



Posted by: Ed Tittel
Intel Driver Update Utility doesn't always recommend the newest drivers, Intel Driver Update Utility settles for valid rather than current drivers in some cases

The Intel Driver Update Utility is handy but not always completely current

The Intel Driver Update Utility (IDUU) is a handy-dandy software tool that depends on a “Systems Requirements Lab” active widget from Husdawg to scan PCs for Intel components, and to report on their current update status. It’s pretty useful, and has become part of my normal maintenance routine for checking driver status on the PCs I manage. I also use DriverAgent (though other tools like RadarSync, Driver Detective, Drive Guide, and so forth also do pretty much the same thing), but I’ve observed that Intel is often better at keeping up with its own drivers than third parties, so I’ve come to depend on IDUU to help me keep my Intel drivers as current as can be.

But recently, I noticed something subtle about the language that the IDUU uses to report on the drivers it finds, which in turn led me to realize that Intel apparently doesn’t care whether a driver is the most current in every case. Rather, it appears to care only that some drivers are “valid,” even when newer drivers may be available. Notice this report block on the Intel Chipset Software Installation Utility:

Now, compare this to the language used for the built-in Realtek RTL8013EL Ethernet GbE adapter on that same motherboard:

In the chipset case, the version is valid, but in the Ethernet adapter’s case the driver is current. “Hmm…” I wondered, “Is there a difference between valid and current?”

The answer turns out to be “Yes.” By researching the most current Series 3 Chipset Driver through a manual search on Intel’s Download pages, I was able to determine that the highest numbered version available was 9.2.3.1022. My machine was happily running version 9.2.3.1016 instead. A quick download and install took care of that issue, as the preceding chipset screen capture now attests, but this leaves me wondering why the IDUU doesn’t tell its users that a more current version is available. My best guess is that it waits until some critical feature gets introduced in a newer update, and only then instructs its users to update their drivers. That would be a reasonable approach to driver updates, where “if it ain’t broke, don’t fix it” often prevails as a guiding principle. I can only imagine that’s why Intel labels the 9.2.3.1022 version as valid, even though it’s also the most current.

But if, like me, you want at least some machines to always be running the absolute latest and greatest Intel drivers, if only for test purposes, it’s good to know that when you see “valid” in this tool you should probably go looking for something more current, just in case it’s out there for downloading.
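If you want to script that kind of double-check, here is a rough Python sketch. It queries installed driver versions through WMI (via the standard wmic command) and flags anything older than a version you've looked up by hand, for example on Intel's download pages; the 9.2.3.1022 target is just the example from this post, the comparison is only meaningful for the matching device family, and the whole thing is a quick aid rather than a replacement for the IDUU or DriverAgent.

```python
# Rough sketch: list installed driver versions via WMI (using the standard wmic
# command) and flag anything older than a version you've looked up by hand, e.g.
# on Intel's Download Center. The 9.2.3.1022 target below is just the example
# from this post; the check only makes sense for the matching device family.
import csv
import io
import subprocess

def installed_drivers(name_filter="Intel"):
    """Return {device_name: version_tuple} for signed drivers matching the filter."""
    out = subprocess.run(
        ["wmic", "path", "win32_pnpsigneddriver",
         "get", "DeviceName,DriverVersion", "/format:csv"],
        capture_output=True, text=True, check=True).stdout
    drivers = {}
    for row in csv.DictReader(io.StringIO(out.strip())):
        row = {(k or "").strip().lower(): (v or "").strip() for k, v in row.items()}
        name, version = row.get("devicename", ""), row.get("driverversion", "")
        if name_filter.lower() in name.lower() and version:
            drivers[name] = tuple(int(p) for p in version.split(".") if p.isdigit())
    return drivers

def flag_stale(drivers, latest_known):
    """Print each driver's version and whether it trails the manually found latest."""
    target = tuple(int(p) for p in latest_known.split("."))
    for name, version in sorted(drivers.items()):
        status = "current" if version >= target else "older than " + latest_known
        print(f"{name}: {'.'.join(map(str, version))} ({status})")

if __name__ == "__main__":
    flag_stale(installed_drivers("Intel"), "9.2.3.1022")
```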


January 25, 2012  8:55 PM

About Sensors and Windows 8



Posted by: Ed Tittel
basic Windows 8 API makes it easy for manufacturers to include sensors in Windows 8 platforms and for developers to put them to work, Windows 8 offers simple access to sensor data for sophisticated uses

The latest entry in the Building Windows 8 blog is called “Supporting Sensors in Windows 8,” and it comes from Gavin Gear, one of the Microsoft Product Managers on the Windows 8 Device Connectivity team. It tells a fascinating story of how (and why) the basic Windows 8 hardware requirements include so many sensors (an accelerometer, a magnetometer, and a gyroscope, among others) so that application and service developers can count on basic system capabilities when building next generation software of all kinds. The following diagram helps to illustrate how combining such hardware devices can support smoother and more capable physical functions, not just for the usual purposes (screen rotation and orientation) but also for lots of other interesting capabilities (including game controllers, smart remotes, measurement and data acquisition, and so forth):

Position and motion sensing devices work better in concert than by themselves

Mr. Gear follows up his explanation and discussion with some data traces from an accelerometer to show why straightforward use of such data doesn’t always produce the best user experience. From there he goes on to explain how MS has worked with hardware manufacturers and device makers to create a single Microsoft driver that can “…work with all Windows-compatible [sensors] connected over USB and even lower power busses…” The upshot is that any sensor company can construct a Windows 8 compatible package by adhering to “…public standard USB-IF specifications for compliant device firmware.”

Pretty slick already, but MS also worked to control power consumption and sensor performance by enabling sensor processing to occur at the hardware level without involving the CPU, and by building filtering mechanisms to restrict data flow (and event rates) to speeds that won’t exceed the processing stack’s handling capacity (this also helps reduce consumption of CPU cycles).
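To make the filtering idea concrete, here is a purely conceptual Python sketch (it is neither Microsoft's driver nor the WinRT API): it smooths a noisy accelerometer axis with a simple exponential low-pass filter and surfaces at most one reading per report interval, which is roughly the combination of smoothing and event-rate limiting described above.

```python
# Conceptual sketch only (not Microsoft's driver or the WinRT API): smooth a noisy
# accelerometer axis with an exponential low-pass filter, and surface at most one
# reading per report interval so downstream code sees calm, rate-limited data.
from dataclasses import dataclass

@dataclass
class FilteredAxis:
    alpha: float = 0.2   # smoothing factor: smaller means smoother but laggier
    value: float = 0.0

    def update(self, sample: float) -> float:
        self.value += self.alpha * (sample - self.value)
        return self.value

def throttled_readings(samples, sample_period_ms, report_interval_ms):
    """Yield smoothed readings no more often than once per report interval."""
    axis = FilteredAxis()
    elapsed = 0.0
    for sample in samples:
        smoothed = axis.update(sample)
        elapsed += sample_period_ms
        if elapsed >= report_interval_ms:
            elapsed = 0.0
            yield smoothed

if __name__ == "__main__":
    import random
    raw = [1.0 + random.uniform(-0.3, 0.3) for _ in range(200)]  # noisy ~1 g axis
    for reading in throttled_readings(raw, sample_period_ms=10, report_interval_ms=100):
        print(f"{reading:.3f}")
```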

The blog concludes with a discussion of how sensors can play into Windows 8’s new “Metro-style” apps, using a new sensor API included as part of the Windows 8 runtime environment (aka WinRT). Code examples show the APIs to be simple and straightforward, making them easy for developers to put to work. It should be interesting to see how easy access to sensors and location data helps raise the bar for Windows 8 apps over the months and years ahead. There’s even a product reference to a specialty sensor part built into last year’s Windows 8 Developer Preview slate PC from Samsung; now that developers can purchase such parts on the open market, that should help get this phenomenon rolling.


January 23, 2012  2:51 PM

Interesting discussion on fairness and ease for MS cert exams



Posted by: Ed Tittel
MS cert exams continue to change and evolve, MS cert exams deal with issues of fairness and challenge

As part of my daily due diligence in writing this blog, I keep up with the postings on Microsoft’s Born to Learn blog. A recent post (1/17/2012) from Andrew Bettany entitled “Have MCTS/MCITP exams got easier?” caught my eye this morning, as much or more for the comments on the post as for the post itself. Be sure to check it out if you’ve got a Microsoft exam in your near-term future.

But what really got my wheels turning were the concepts of “fairness” and “challenge” raised in a comment by Microsoft’s lead psychometrician, Liberty Munson, especially in light of my recent brokering of contact between her, an exam developer, and a friend and colleague who had taken the 70-662 Microsoft Exchange Server 2010, Configuring exam no less than four times without managing to pass it on any attempt. Given that he’s actively engaged in consulting with major Fortune 1000 companies to help them transition hundreds of thousands of email users into Exchange 2010, I have trouble accepting the notion that he simply doesn’t understand the platform and its capabilities. In that light, consider what Ms. Munson has to say in this posting:

Thanks for great post! It shouldn’t surprise anyone that people who pass exams tend to think they are easy while those that fail believe that they are too hard. The key is to make sure that those who fail also believe that they were “fair assessments” of their skills while those who pass also found them to be “challenging” (or “easy but challenging”). That’s nuance of “easiness” that is often overlooked by candidates and hiring managers. Exams should be “easy” for qualified candidates because we’re assessing skills that they have, but they should also be “challenging” and fair assessments of skills. We rarely get past the “it’s too easy” part of the conversation to understand the deeper nuance…Was it challenging? Was it fair? Does it differentiate you as someone who knows the technology? It’s possible that our exams haven’t been challenging–that this is assumed in the “it’s too easy” statement–but there’s a deeper question here that needs to be asked.

Enough with the philosophical aspects of this conversation :)…over the last year, we’ve made changes to our item writing guidelines and use of item types to increase the perceived difficulty of our exams because we understand that the conversation generally stops at “it’s too easy”…certain item types and item writing strategies can be leveraged to make exams feel more challenging without requiring more skills or knowledge. However, as we introduce new certifications, you’ll find that we’ll be requiring deeper knowledge and higher level skills to address comments that “it’s too easy.” Psychometrically, I regularly monitor the difficulty of the exams and ensure that it’s at an appropriate level for the target audience and programmatic goals/definitions of what it means to hold the credential. Changes are made when and where necessary to meet these targets.

I’d love to start hearing people talk about our exams as “challenging and fair assessments” as we continue to improve the ability of our certifications to differentiate people who really know their stuff from those who don’t or just sorta do. Andrew–Thanks for starting to get the word out!

Clearly, some of the ongoing renovation of existing MS exams and development of new ones incorporates great concepts and intentions on the part of the people who work with SMEs to put those exams together, and to measure and monitor them for statistical relevance to the populations being tested. But “fairness” is a difficult concept to get right, especially when exam questions veer into the realms of what my aggrieved friend and colleague perceived as mostly “administrivia” that real-world admins would simply look up if they had to deal with such seldom-used (or rarely-needed) features and functions. In his case, he’d readily concede that the 70-662 exam is challenging, but also vehemently claims that it’s grossly unfair.

I sympathize with both parties in this kind of encounter. On the one hand, having created hundreds of practice tests for the certification exams covered in my many Exam Cram and study guide certification books, I get how difficult it is to create sound, meaningful, and relevant exam items for candidates to ponder and learn from. On the other hand, I also really feel for my friend’s situation, where his consulting company requires him to pass the 70-662 exam so he can present the “right credentials” to his many clients, but where he feels that the exam is more of a random check on obscure or little-used Exchange features and functions than a meaningful assessment of his skills and knowledge of the platform.

This is a tough situation, and one that requires not only that both sides recognize each other’s motivations and priorities, but also that the testing party provide enough information to testees to let them understand what they must learn to get past the exam, and that testees take this seriously enough to make a concerted effort to learn and master the necessary skills, knowledge, and even administrivia details to get over the hump. Usually, taking a Microsoft Official Curriculum class, reading the books, and acing numerous practice exams is enough to guarantee a good result. But when this tried-and-true method fails, it’s time to start asking why and trying to reason one’s way into the proper information set to figure out what’s wrong. My colleague also believes that MS should provide testees with more detailed feedback about questions answered incorrectly rather than simply flagging certain concept areas where the test-taker failed to meet minimum score requirements.

All this goes to show that certification exams really aren’t easy to create, and sometimes can be incredibly difficult to pass as well. It should be interesting to see how this situation plays out, and how the two parties will find some kind of rapprochement.


January 20, 2012  4:14 PM

Thurrott & Rivera on Windows 8 Logo Requirements



Posted by: Ed Tittel
MS unveils Win8 logo requirements. Win8 logo devices to include lots of interesting and useful features

In his most recent monthly column for WindowsITPro entitled “The Rigor of Windows 8 Hardware Certification Requirements, …” Windowsmeister Paul Thurrott provides a snapshot of the current requirements that Microsoft intends to levy on those seeking to put the Windows 8 logo decal on their PCs, notebooks, tablets, and smartphones. It’s an interesting and detailed collection of info, based in turn on Rafael Rivera’s equally interesting WithinWindows blog entitled “Windows 8 Secrets: PC and Device Requirements.” Both are worth reading in their entirety; here I’ll summarize some of the most interesting high points.

A "mock logo" for Windows 8


  • 5-point digitizers: Windows 8 touch PCs must support at least 5 touch points (five fingers’ worth, guaranteeing support for complex multi-touch swipes and gestures).
  • NFC “touch marks”: Near field communication lets devices interact with each other when they’re so close as to be nearly touching, and is key to emerging technologies for payment transfers, especially from smartphones to cash registers (or computers acting as same).
  • 5 required hardware buttons: devices must provide buttons for power, rotation lock, the Windows key, and volume up/down controls. Also, domain-joined devices that lack keyboards must support Windows Key + Power as an alternative to the Ctrl+Alt+Del key sequence (the infamous “three-fingered salute” for those who remember DOS and early Windows versions).
  • Minimum component requirements include 10 GB of free storage space after Win8 is installed, support for UEFI (see my 9/23/2011 blog for more info), WLAN, Bluetooth and LE networking, Direct3D 10 graphics with a WDDM 1.2 driver, 1366×768 screen resolution or better, 720p camera, an ambient light sensor, a magnetometer and an accelerometer, a gyroscope, at least one USB 2.0 controller and exposed port (or better), and audio speakers. Sounds like a decent smartphone or tablet, eh?
  • Graphic drivers must be upgradeable without requiring a reboot (easier to do thanks to dropping XDDM drivers and keeping only WDDM drivers for Windows 8).
  • 2-second resume (does not apply to ARM-based devices yet) from Standby (S3) to “resume complete” status. Rivera speculates “Microsoft simply doesn’t have enough data in this space” to impose the same restriction on ARM as on x86; Thurrott speculates that “it will be added in a future release, such as Windows 9.”

Veeeeeeeeeeeeeeery interesting. Be sure to check out the original posts, too!


January 18, 2012  4:18 PM

Windows 8’s “next generation file system” gets a preview



Posted by: Ed Tittel
ReFS offers interesting new file system capabilities, Resilient File System or ReFS to supplement New Technology File System or NTFS in Windows 8

Building Windows 8 strikes again, this time with a 1/16/2012 blog from Surendra Verma, development manager for the Windows 8 Storage and File system team. It’s entitled “Building the next generation file system for Windows: ReFS.” The ReFS acronym is expanded in the FAQ that follows the blog post to mean “Resilient File System;” that same FAQ also documents some awe-inspiring maximum file system attributes.

What’s New About ReFS?

Verma takes several cuts at answering this question without necessarily addressing it completely or explicitly, so I’m doing my best to read between the lines here. It appears that ReFS comes with a new engine for file access and management, and also includes enhanced verification and auto-correction facilities. In Verma’s words, ReFS is optimized for “extreme scale” to “use scalable structures for everything.” Interestingly, a ReFS volume need never be taken offline: even when corruption occurs, unaffected volume elements remain available and accessible. And a new resiliency architecture (when used in concert with Storage Spaces) will help to protect and preserve volume contents. Verma’s list of key features also highlights plenty of new capability:

  • Metadata integrity with checksums
  • Integrity streams providing optional user data integrity
  • Allocate on write transactional model for robust disk updates (also known as copy on write; see the sketch after this list and figure)
  • Large volume, file and directory sizes
  • Storage pooling and virtualization make file system creation and management easy
  • Data striping for performance (bandwidth can be managed) and redundancy for fault tolerance
  • Disk scrubbing for protection against latent disk errors
  • Resiliency to corruptions with “salvage” for maximum volume availability in all cases
  • Shared storage pools across machines for additional failure tolerance and load balancing
Some changes underneath, but not much change on top
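To make the checksum and allocate-on-write items from that list a bit more concrete, here is a purely conceptual Python sketch. It bears no resemblance to ReFS’s actual on-disk structures; it just shows the two ideas in miniature: metadata carries a checksum that gets verified on every read, and an update writes modified data to a fresh block and flips the pointer only after the new block is safely written, so the old version stays intact if anything goes wrong mid-update.

```python
# Conceptual sketch only; nothing like ReFS's real on-disk structures. It shows two
# ideas from the feature list in miniature: metadata carries a checksum verified on
# every read, and updates are "allocate on write": modified data goes to a fresh
# block, and the pointer flips only after the new block is safely written.
import hashlib

class TinyVolume:
    def __init__(self):
        self.blocks = {}      # block_id -> bytes (existing blocks are never overwritten)
        self.next_id = 0
        self.metadata = {}    # name -> (block_id, checksum)

    def _write_new_block(self, data: bytes) -> int:
        block_id, self.next_id = self.next_id, self.next_id + 1
        self.blocks[block_id] = data
        return block_id

    def put(self, name: str, data: bytes):
        block_id = self._write_new_block(data)
        # Updating the pointer and checksum is the last step, so a crash mid-write
        # leaves the old, intact version still referenced.
        self.metadata[name] = (block_id, hashlib.sha256(data).hexdigest())

    def get(self, name: str) -> bytes:
        block_id, checksum = self.metadata[name]
        data = self.blocks[block_id]
        if hashlib.sha256(data).hexdigest() != checksum:
            raise IOError(f"checksum mismatch on {name!r}: block {block_id} is corrupt")
        return data

if __name__ == "__main__":
    vol = TinyVolume()
    vol.put("report.txt", b"first version")
    vol.put("report.txt", b"second version")   # lands in a new block; old block untouched
    print(vol.get("report.txt").decode())
```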

What Remains in Common between ReFS and NTFS?

Not surprisingly, Microsoft has also elected to “maintain a high degree of compatibility with a subset of NTFS features that are widely adopted” although they do intend to “deprecate other … [features] … that provide limited value at the cost of system complexity and footprint.” This means that BitLocker encryption, access-control lists (ACLs), the USN journal, change notifications, symbolic links, junction, mount and reparse points, volume snapshots, file IDs, and oplocks will remain the same as those used in NTFS.

However, there will be no conversion utility to transform NTFS-formatted volumes into ReFS. Instead, Microsoft advocates copying data from NTFS into ReFS to make the change, which also allows the resulting ReFS volumes, file collections, and data sets to enjoy the benefits of the new ReFS structures, maximum sizes, and resiliency. For the time being, ReFS cannot be used for boot volumes in Windows 8 (though I’ve seen some discussions of making ReFS volumes bootable for the upcoming version of Windows Server, due out late 2012 or early 2013).

What’s Gone from ReFS?

Verma does address this as “What semantics or features of NTFS are no longer supported on ReFS?”

The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas.

This is an interesting grab-bag of features, among which there are some obvious items: EFS was never that popular, and had sufficient problems to make lots of enterprise users steer clear; short names have been passé for 10 years or more; and named streams (aka “alternate streams”) have posed some clear and present security dangers. Extended attribute elements (along with object IDs) support the distributed link tracking service, which can be convenient but also had some problems, as did sparse files and hard links. It’s a little surprising to see file compression and quotas going away, and user data transaction handling was never implemented much anyway.

Once again, it’s “out with the old, and in with the new.” This should give Mark Russinovich and his colleagues something interesting to update in the next edition of Windows Internals, which should give me an opportunity to understand the impact of these changes in more detail.


January 16, 2012  11:47 PM

Compatibility Rears Its Ugly Head for Bluetooth



Posted by: Ed Tittel
Bluetooth driver difficulties on Acer 5552 notebook PC, interesting Bluetooth compatibility issues with Lego MindStorms

Ah, life can be strange when trying to get devices to play nice together. In this particular case, I decided to add a Bluetooth dongle to my son’s Acer 5552 notebook PC. With its AMD Athlon II P340 Dual-Core CPU, 4 GB RAM, and a Western Digital 5,400 RPM 250 GB hard disk, it’s a real little workhorse that matches my old reliable Dell D620 in overall performance and capability, but the killer deal I got from TigerDirect didn’t include Bluetooth in its $317 price tag.

The Acer 5552 is a modest but capable budget notebook PC

If you read my IT Career JumpStart blog, you may recall that the post dated 12/26/2011 recounted some interesting adventures in getting my son Gregory’s Lego MindStorms robot kit up and going. In the weeks that have followed, he and I have added sensors to his robot, and have spent a number of happy and busy hours programming the machine to do all kinds of interesting stuff. When I discovered that the NXT 2.0 controller in his kit supports Bluetooth, I naively assumed that it would be easy to get the notebook and the robot communicating with each other, especially when I was able to get the built-in Bluetooth on my HP dv6t traveling notebook and the NXT 2.0 controller working more or less instantly (after puzzling through the often cryptic menu navigation on the controller’s 2×3″ LCD screen, that is).

But wow, was I ever surprised by how much time, effort, and running around (with a certain amount of blue language for my Bluetooth issues along the way) was needed to get things working with a plug-in USB dongle on the Acer notebook. I started out with the TrendNet TBW-106UB USB Class 1 Bluetooth v2.0 and v2.1 with EDR dongle I’d purchased for an HP course I revised in early 2010 that required me to write about Bluetooth printer connections (which I can now confess I plugged into a Dell 968 All-in-One Printer rather than an HP unit). But alas, though the TrendNet dongle does Plug-n-Play faultlessly on the printer, I couldn’t get it to work with the Acer 5552. I messed with driver after driver, and even went down another rathole related to an update to the BlueSoleil software that TrendNet supplied with the dongle.

The TrendNet TBW-106UB

Later, in reviewing the NXT documentation, I learned that its Bluetooth works only with the generic Microsoft and Widcomm Bluetooth stacks (the BlueSoleil stuff is specifically identified as incompatible, alas). So off I went to the Fry’s website to see what kinds of USB Bluetooth dongles they had in stock at my local store, and ended up grabbing a nice Asus USB-BT211 from them for $14.99 (with a $5.00 mail-in rebate that brought the price down to a paltry $10, or about $17 less than the BlueSoleil software I purchased to update the TrendNet unit, only to learn I couldn’t use it with the NXT robot anyway…sigh). I was able to update the drivers all the way up to the most current March 2011 Atheros AR3011 version available from the DriverAgent website, too, and keep both the notebook and the NXT 2.0 controller happy with the Bluetooth connection.

The cheaper Asus USB-BT211 did the trick

But that took some doing. I quickly learned I had to uninstall old versions of drivers before installing new ones, and equally quickly learned it was easier to roll back to the restore point created when installing an outdated driver than to rely on the uninstall code the driver writers furnished with their software to restore the 5552 to a pristine state.

And now, finally, we can use the 5552 as a wireless remote to drive the robot around (fun for the boy) and as a handy-dandy way to download programs from the notebook to the robot controller (convenient for us, and it doesn’t require a relatively short 1 m tether for a USB Type A-to-Type B plug-in cable between the two devices, either). This was my first big Bluetooth fracas, though, and I must confess I learned an awful lot about the various available drivers, protocol stacks, and the Widcomm and BlueSoleil Bluetooth utility packages.

On the whole, I would’ve been just as happy, or perhaps even happier, to have just had the whole thing work plug-n-play as it did immediately on my HP notebook. But that’s not always the way things go in Windows World, is it?


January 13, 2012  5:19 PM

Waaaay Cool: Virtualizing Storage in Windows 8



Posted by: Ed Tittel
Win8 offers native expandable logical storage, Win8 Storage Spaces act like WHS Drive Extender, Win8 Storage Spaces is a killer new feature

A week ago, the latest post to the Building Windows 8 blog was entitled “Virtualizing storage for scale, resiliency, and efficiency.” And while the whole thing is definitely worth perusing, I want to zoom in on something that catches my particular fancy in this blog (it also caught Word and Windows guru Woody Leonhard’s attention in the latest Windows Secrets newsletter, which is what initially brought it to my attention; his piece is called “Storage Spaces might be Win8’s best feature yet“).

But first, I have to explain why I think the Storage Spaces feature is so awesome, by hearkening back to another hobby horse of mine — namely, Windows Home Server, aka WHS (I’ve blogged five times on this platform right here: once in 2008, and twice each in 2009 [1,2] and 2011 [1,2]). Turns out that Windows Home Server used to support a feature called Drive Extender that essentially allowed you to shrink or grow a multiple-disk logical volume simply by removing or adding hard disks on the system. No need for migration, recopying, partition re-sizing, Disk Management, and so forth: just a few commands to tell the file system what’s going on, and smooth sailing throughout.

I was incredibly bummed when it became clear that WHS 2011 (the latest and greatest version, out last year) did not continue earlier versions’ support for this feature, and thought perhaps we might never see its like again in the Windows World. I was wrong! In Windows 8, Storage Spaces lets users set up logical volumes that are bigger than the combined disk space provided for them, so you can grow the physical space over time to match the logical definition (Microsoft calls this “thin provisioning”). Here’s how Rajeev Nagar, a group manager for Microsoft’s Windows 8 Storage and File System team, explains what Storage Spaces can do:

Organization of physical disks into storage pools, which can be easily expanded by simply adding disks. These disks can be connected either through USB, SATA (Serial ATA), or SAS (Serial Attached SCSI). A storage pool can be composed of heterogeneous physical disks – different sized physical disks accessible via different storage interconnects.

Usage of virtual disks (also known as spaces), which behave just like physical disks for all purposes. However, spaces also have powerful new capabilities associated with them such as thin provisioning (more about that later), as well as resiliency to failures of underlying physical media.

And if and when you outgrow the physical capacity originally allocated to this storage pool, Storage Spaces simply notifies you in a pop-up window that you need to add more capacity, which you can do by plugging in more disks, then allocating them to that pool. Even cooler, pools can be designated as mirrored so that there will always be at least two (possibly three) copies of data on different physical disks within that pool. And should any disk fail, Storage Spaces automatically regenerates data copies for all affected holdings, as long as enough alternate physical disks remain available within that pool. It’s not RAID, but it is fault-tolerant and resilient, and a great technology for all kinds of SOHO and end-user applications. The parity designation also permits data to be reconstructed in the face of physical disk failures. Parity spaces work best for big, relatively static files (like music, movies, home videos, and so forth) that are sequential in nature and require little or no updating of existing files.
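For anyone who wants to see the thin provisioning idea boiled down to its essentials, here is a purely conceptual Python sketch (it has nothing to do with how Storage Spaces is actually implemented): a space advertises more logical capacity than the pool physically holds, physical blocks are claimed only as data is written, and the pool asks for more disks when writes catch up with what is actually installed.

```python
# Purely conceptual sketch of thin provisioning; not how Storage Spaces is built.
# A space advertises more logical capacity than the pool physically holds, physical
# capacity is claimed only as data is written, and the pool demands more disks when
# writes catch up with what's actually installed.
class StoragePool:
    def __init__(self):
        self.physical_gb = 0   # sum of all disks added so far
        self.used_gb = 0

    def add_disk(self, size_gb: int):
        self.physical_gb += size_gb

    def allocate(self, amount_gb: int):
        if self.used_gb + amount_gb > self.physical_gb:
            # Storage Spaces would surface an "add more capacity" notification here.
            raise RuntimeError("pool is out of physical capacity; add more disks")
        self.used_gb += amount_gb

class ThinSpace:
    def __init__(self, pool: StoragePool, logical_gb: int):
        self.pool, self.logical_gb, self.written_gb = pool, logical_gb, 0

    def write(self, amount_gb: int):
        if self.written_gb + amount_gb > self.logical_gb:
            raise RuntimeError("space is full (logical limit reached)")
        self.pool.allocate(amount_gb)   # physical capacity is claimed only on write
        self.written_gb += amount_gb

if __name__ == "__main__":
    pool = StoragePool()
    pool.add_disk(500)                         # start with one 500 GB disk
    media = ThinSpace(pool, logical_gb=2000)   # a "2 TB" space on 500 GB of real disk
    media.write(400)
    try:
        media.write(200)                       # exceeds physical capacity
    except RuntimeError as err:
        print(err)
        pool.add_disk(1000)                    # "plug in" another disk
        media.write(200)                       # now it succeeds
    print(f"used {pool.used_gb} GB of {pool.physical_gb} GB physical")
```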

Here’s how to work with Storage Spaces. You can use Windows PowerShell commands to create a storage pool and to set up Storage Spaces that reside in that pool, or you can use the Storage Spaces item in Control Panel to go through that process. This means selecting drives, then naming a storage space, assigning a drive letter, establishing the layout (default, mirrored, or parity), and assigning a maximum size. Afterward, you can always add more drives as needed, or likewise make changes to configuration data. Right now, the Win8 Developer Preview only supports storage spaces up to 2 TB in size, but when the Win8 Beta ships in February (next month, that is) you’ll be able to create storage spaces of arbitrary size.

What a great feature to bring to the Windows desktop. This may indeed be a true “killer feature” that prompts users to upgrade to the new OS to exploit its capabilities, particularly those with big media collections!


January 11, 2012  11:15 PM

SmartDeploy Does “Drivers and Deltas” RIGHT for OS Images



Posted by: Ed Tittel
SmartDeploy offers delta mechanism to make component-level changes to existing Windows images, SmartDeploy separates drivers from OS elements to manage images effectively

On November 28, I wrote a blog entitled “SmartDeploy Helps Ease Heterogeneous OS Situations,” in which I promised a follow-up early this year to provide more technical details about how the company makes a science of the slippery art of matching Windows drivers to Windows OS installations. I got my info fix on this topic, and more, in a conversation with SmartDeploy’s Director of Sales, Spencer Dunford (an uncommonly technical sales guy, if ever I ran across one) and his colleague, Senior Systems Engineer Erik Nymark, this past Monday (January 9, 2012).

The patented technology that SmartDeploy uses for its driver magic makes perfect sense, probably saves a lot of painstaking work and serious reverse engineering, and builds on the great work that Microsoft has done with the Windows Pre-Installation Environment (aka WinPE) and its boon companion, the Windows Image (WIM) format and container mechanism that Windows Vista, 7, and Server 2008 versions use to manage OS installation and packaging of image information.

Prowess SmartDeploy uses WIM as a storage format and data container for all of its packaging and image presentation methods. It has worked its way deeply enough into WIM internals, in fact, that it does the following to deliver WIM data elements for use in actual deployments (where individual components get combined into a valid WIM container that’s used as the focus for image installations):

  • Hardware/device drivers are broken out into a separate component file that can be built and managed independently of OS files and components. These are known as “platform packs” and confer some interesting advantages from their use: they are based on device enumeration from a reference image, and therefore need only contain those drivers that some particular Windows installation actually needs, rather than a sizable galaxy of possible device drivers that any foreseeable Windows installation might need. Thus for example, the DriverStore folder in my Windows 7 installation is 1.07 GB in size (which represents possible drivers) while the Drivers folder is under 58 MB in size (which represents the drivers it’s actually using). Essentially, the SmartDeploy approach allows the OS build to zero in only on the drivers it needs, and omit everything else, to achieve considerable image size reductions.
  • The OS and other components travel along for deployment as well, and remain logically and physically separate from the driver stuff. This makes it a lot easier to mix and match various platform packs with the standard OS portion to support environments where as many as 60 or 70 different reference images must be supported.
  • Perhaps most interesting, SmartDeploy can also support “delta WIMs” so that an existing image can be merged with a set of changes (new updates, service packs, and so forth) without having to transport the entire image from a deployment server to its deployment targets (a conceptual sketch of the delta idea follows this list). Thus, instead of moving typical images that can be anywhere from 5 to 25 GB in size to replace an old image with a new one, the existing image can be updated in situ with the SmartDeploy tools and laid down to replace its previous incarnation entirely in place. This often means that delta WIMs of 500 MB to no more than a couple of GB in size will do the job quite nicely.
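Here is a rough Python sketch of the delta concept in the abstract; it has nothing to do with SmartDeploy’s actual WIM tooling or file formats. It treats an image as a map of file paths to content hashes, so a delta only needs to carry the files that changed plus a list of files to remove (the image_v1 and image_v2 folder names are hypothetical placeholders).

```python
# Rough sketch of the delta idea in the abstract; not SmartDeploy's WIM tooling or
# formats. An image is treated as {relative path: content hash}, so a delta only
# needs to carry the files that changed plus a list of files to remove.
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file's path (relative to root) to a hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def compute_delta(old: dict, new: dict) -> dict:
    """Describe what must change to turn the old image into the new one."""
    return {
        "changed": sorted(path for path in new if old.get(path) != new[path]),
        "removed": sorted(path for path in old if path not in new),
    }

if __name__ == "__main__":
    # image_v1 and image_v2 are hypothetical folder names; point them at two real
    # directory trees to try the comparison.
    delta = compute_delta(snapshot(Path("image_v1")), snapshot(Path("image_v2")))
    print(f"{len(delta['changed'])} files to ship, {len(delta['removed'])} to delete")
```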

Misters Dunford and Nymark also showed me some very cool demos of these technologies at work, and performed a live update of a Windows 7 Professional image with a delta WIM that took less than 15 minutes to complete, and even less time to set up and stage for deployment. SmartDeploy makes excellent use of wizards to structure initial images and to build deltas upon those images, and also includes some slick automation for creating answer files to perform post-install tasks for adding applications and other runtime components to OS images as they get deployed to target machines.

The biggest benefit of their tools and technology is that, according to Dunford, they “…have seen a change in how frequently and how often images are updated, and patches or changes are deployed in image format.” Anything that makes this kind of job easier and faster to accomplish is definitely worth digging into. Check out the SmartDeploy website for more information.


January 9, 2012  4:04 PM

Interesting Detour into Legacy Land: FAT-16 and SpinRite



Posted by: Ed Tittel
SpinRite v6.0 must use IDE mode BIOS to work properly, Windows 7 install encounters interesting issues with FAT-16 UFD

Among the many elements in my diagnostic and repair toolkit, I have a licensed copy of Steve Gibson’s SpinRite v6.0, purchased in 2006 (I still have a copy of the receipt in my email archives, and I’m glad I did for reasons I’ll elaborate upon later). I’ve been messing around with a very nice little 2.5″ drive cage that parks itself in a standard 5.25″ desktop PC drive bay — see this blog at edtittel.com “Great Product for Recycling 2.5″ Notebook Drives” for more info on that recent adventure. This led to some fiddling about with those recycled drives to transform them from OEM and/or home-built system drives with repair and hidden partitions into standard, single-partition data drives. And, in turn this led to the need for some low-level repair on one of the drives involved (luckily for me it was in the disk area devoted to a repair/recovery partition that I had never used, or it surely would’ve made a bad situation worse). And finally, to close this circle, that’s what led me back to SpinRite.

But, as usual with Windows, there were a couple of interesting flies in the ointment to contend with. One is apparently a problem with my production desktop (a prospect that never fails to darken my day), which simply refused to recognize the FAT-16 bootable UFD upon which SpinRite is installed. When I stuck it into my production system to make sure the UFD was still OK, I ended up wandering into an interesting hall of mirrors. This system doesn’t see the UFD (that’s MS-jargon for USB Flash Drive, in case the acronym is unfamiliar to some readers), except as an anonymous entry in the Disk Management utility:

You can see the drive in the partition map pane below but it's absent from the Volume list above

And although USB gives me the cheerful French horn blast when I stick the drive into that machine, I don’t get the usual drive-recognized pop-up window that asks me if I want to do something with it, either (as you’d expect, since the file system apparently doesn’t recognize the drive as a legitimate disk volume).

But where it gets interesting is that another of my 32-bit Windows 7 systems and a 64-bit system DO recognize the drive, see it as a legit volume, and happily assign it a drive letter as well:

No problem seeing the UFD as a legit named volume on other systems

Once I was able to examine the contents of the SpinRite UFD and ascertain they were still complete and correct (easy enough to do by downloading the files and extracting them to disk to compare file names and file sizes, which is where my packrat tendency to keep all receipts let me grab a new copy from the Gibson Research website and do due diligence), I was ready to try repairs on my slightly wonky WD 500 GB notebook drive. (Alas, I’m going to use this apparent problem with the file system on my production desktop to spur me into the upgrade to 64-bit that I’ve been putting off for too long now. I’m not sure I know how to solve this kind of file system issue, and it’s probably easier just to wipe the system drive, upgrade from 4 to 12 GB of RAM, and move up onto a more capacious working environment.)
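That name-and-size comparison is easy to script, too; here is a quick Python sketch of the idea. The two folder paths are hypothetical placeholders for the mounted UFD and the freshly extracted download, and matching names and sizes is only a sanity check, not proof the files are bit-for-bit identical.

```python
# Quick sanity check: compare two folder trees by relative file name and size.
# Matching names and sizes doesn't prove the files are bit-for-bit identical, but it
# quickly exposes missing or truncated files. Both paths below are placeholders.
from pathlib import Path

def sizes(root: Path) -> dict:
    """Map each file's path (relative to root) to its size in bytes."""
    return {str(p.relative_to(root)): p.stat().st_size
            for p in root.rglob("*") if p.is_file()}

def compare(left: Path, right: Path):
    a, b = sizes(left), sizes(right)
    for name in sorted(set(a) | set(b)):
        if name not in a:
            print(f"only in {right}: {name}")
        elif name not in b:
            print(f"only in {left}: {name}")
        elif a[name] != b[name]:
            print(f"size mismatch: {name} ({a[name]} vs {b[name]} bytes)")

if __name__ == "__main__":
    compare(Path("E:/"), Path("C:/temp/spinrite_download"))  # hypothetical locations
```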

That’s when I hit the next bump in the road: SpinRite would load FreeDOS and get going, but it would never get past the “looking for storage devices” phase of its start-up on my test machine. A little research quickly revealed that not all BIOSes are created equal, and apparently the one that gets loaded when a PC boots into AHCI (or RAID) mode is not compatible with the BIOS support that SpinRite needs to do its thing. With a little trepidation, I switched the controller to IDE mode, then booted back up into the UFD. This time, SpinRite ran fine, and after about 6 hours of repair, fixed (or routed around) what ailed my errant disk drive. It seems that the AHCI and RAID BIOS developers cut enough corners in building their versions of the Basic Input-Output System that Mr. Gibson can’t dig deeply and deviously into a drive’s contents through a disk controller that employs such a BIOS. After the problems I’ve had with SATA-IDE vs. SATA-AHCI booting issues, I heaved a sigh of relief after switching back to AHCI on the next reboot to my system drive and seeing everything work properly.

But in the end it all turned out to be easy enough to work around, thanks to the presence of multiple machines I could use to figure things out along the way. Now, I just have to find time to upgrade my production desktop to 64-bit Windows and life will be good. Yeah, right!

