Last Thursday, right after applying the second MS12-078 patch to my production Windows desktop machine, Identity Safe in Norton Internet Security 2013 quit working. I didn’t have time to deal with this until the weekend, and worked around the loss of access to my password vault because I’m lucky enough to have memorized all of my really important passwords for my most frequently accessed Web assets. The basic symptom was as follows: every time I tried to access Identity Safe, I first had to log in to my online Norton account, but each such login attempt ended unsuccessfully with an error message that the software was unable to access the Internet. This, despite my ready ability to log in directly to my Norton account online and actually see the contents of the Identity Safe vault with my own eyeballs.
A little quick Web research showed that hundreds, if not thousands, of other Norton users have suffered from the same problem, some as far back as this summer. Thus, although my symptoms didn’t manifest until Thursday, December 20, the day that the second coming of MS12-078/KB2753842 occurred, those two events aren’t necessarily related. After trying various fixes and workarounds from the Support pages and from advice dispensed to other fellow sufferers, I got exactly nowhere.
So, yesterday I finally took the time to open an online chat with Norton Tech Support and ask for some professional help. It turns out that a remove-and-replace operation on NIS 2013 was necessary to restore Web-based access to the password vault. But somehow during the remove-reinstall maneuver, my original database of account and password information got trashed. The Norton tech support pro valiantly tried to restore the contents of the vault using the local backup file that the program creates in the C:\Program Data\Norton\<<SID>>\NIS_184.108.40.206\IdentitySafeDataStore folder every day, but the Import operation on that file returned the error “file unreadable or corrupt.” I jumped in to help out, and started using Acronis True Image Home to mount images for backups going back into late November, but each attempt had the same result: no luck. After about 90 minutes on the phone, we decided to give up, and I was faced with having to rebuild all 243 of my account and password entries manually. Ouch!
Later that afternoon, running an errand with the family, I had an idea: Why not dig further back into my backup trail and find an Identity Safe snapshot that predated my adoption of the Norton Vault (the Web-based technology that superseded the local disk-based Identity Safe store)? Sure enough, I found a file named IDDStor2.dat in the same folder dated July 5, 2012, and was able to import it into my current Vault environment without difficulty. And then, in looking at the directory structure of the afore-cited Program Data directory for Norton stuff, I observed a pattern that probably indicates why our import efforts failed:
I have to believe that the folder names represent some kind of unique ID or hash value, and that the reason the import failed is that the ID or hash value is used to decrypt the contents of the locally stored vault information. That’s because the 8d2… folder name represents where vault stuff lived when the previous NIS 2013 installation was active, but the 5e0… folder took over when the latest installation occurred. By my reckoning, the error that caused all this grief was the failure to export the local vault contents to a more readable file format before reinstalling NIS 2013, which broke the link that permitted the vault to be properly decrypted at runtime. Thus, I’m adding a manual export of that data to a file on another drive to my weekly list of system maintenance chores. I hope it’ll keep me out of hot water for the foreseeable future.
I’m lucky: I only lost a couple dozen pairs of account-password information when I reverted to July’s vault snapshot. I also keep my logins and account information in a special email folder (the welcome messages that follow new account setups don’t always provide all the necessary details, but they do make it possible to recover them), so I’ve been able to keep up my daily routine without losing too much time or effort getting back to where I started. Other folks who face this kind of problem may not be so lucky, however. For them, some time and effort — and possibly even heartbreak — might be involved in returning to the status quo. So, to all who use the online Norton Vault, I strongly recommend making an occasional export of the Vault contents. That way, you’ll never lose more than what occurs between the time of the snapshot and the time at which you lose access to your current vault contents.
I asked the Norton Tech Support rep who helped me out to communicate to his superiors that it’s not acceptable to create an environment wherein complete loss of important data can occur, especially for something as important as an account-password store. I sincerely hope they come up with a more foolproof way to protect such information in the event of software problems or failures, and that this kind of situation isn’t allowed to persist.
Security bulletin MS12-078, originally released on 12/11/2012, was updated yesterday (12/20/2012). This is a critical patch that seeks to address “vulnerabilities in Windows Kernel-Mode drivers [that] could allow remote code execution,” so it’s a pretty big deal. The update has to do with corrections to the way that Windows kernel-mode drivers handle objects in memory. So far there have been some reports of font corruption in Windows XP and Windows 7 as a result of this fix, which manifested as disappearing fonts in PowerPoint, Corel Draw, QuarkXPress, Flexi, and other graphics/layout applications. Apparently, this is what occasioned the out-of-band “band-aid” on December 20 to fix the bugs introduced by the original version of the MS12-078 update released on December 11. There’s a good story about this on InfoWorld Tech Watch by Windows and Office guru Woody Leonhard, dated December 14, and he has since followed up with coverage of the band-aid in his “re-issue” blog at the same site.
In today’s follow-up post, Woody identifies a “list of borked apps” as follows: Quark Xpress, Quark CopyDesk, FlexiSign, SignLab, Musescore, Avid Marquee, Bentley MicroStation, Inkscape, Xara, Extensis, Serif PagePlus, Document Toolkit, Flash in design mode, and most embarrassingly PowerPoint and, reportedly, Excel. That’s quite a handful!
Apparently, it’s tied into the OpenType Compact Font Format (CFF) driver, as documented in MS KB article 2753842. Yesterday MS amended the KB article to add this language:
The original version of security update 2753842 had an issue related to OTF (OpenType Font) rendering in applications such as PowerPoint on affected versions of Windows. This issue was resolved in the version of this security update that was rereleased on December 20, 2012.
The upshot of this re-issue is that you need to install this new version of MS12-078/KB2753842 whether or not you installed the original, bug-infested version. That’s why many users will see the patch offered a second time (the first time on or around December 11, the second time starting yesterday, December 20). It’s the only way to fix the bugs that the original patch introduced, while also addressing the security vulnerability that both patches were intended to address, even though the first one didn’t do so in the most efficacious way.
OK, I admit it. I’m often a “better late than never” kind of guy. I’ve been writing about UEFI and Windows 8/Windows Server 2012 since September 2011, but I’ve just now finally gotten around to performing an intentional and fully-functional UEFI install of Windows 8.
This comes about thanks in very large part to postings on the Windows Eight Forums by forum meister Shawn Brink (aka Brink) and an anonymous poster named Arkhi, respectively entitled:
- How to Create a Bootable USB Flash Drive for UEFI in Windows 7 and Windows 8 (dated 12/13/2012; Brink)
- How to Install Windows 8 Using the “Unified Extensible Firmware Interface” (dated 9/15/2011; Arkhi)
The steps involved are fairly straightforward once you assemble the correct ingredients and build yourself a bootable Windows 8 installation USB flash drive for UEFI. The other necessary ingredient is a target system drive that is completely blank (if you plan to recycle a drive that’s already been used as a system drive without a UEFI boot, you’ll need to remove all partitions from that device so it shows up in the Disk Management utility as unallocated space). If you need to prep the drive, you can use the
diskpart utility to do this from an elevated command prompt (right-click cmd.exe, then select Run as administrator; you’ll be wiping the contents of the entire drive, so if there’s anything on it you might ever need again, be sure to back it up beforehand):
diskpart
list disk
#note the disk number for the drive you want to wipe clean; I'll use 5 as the number in the example code that follows
select disk 5
clean
exit
Launch the Windows 8 installer, then when the “Where do you want to install Windows?” screen appears, highlight the blank target drive, and select the New entry. Click Apply, and then OK. The disk will be formatted using GPT (GUID partition table) into 4 partitions as follows:
- Partition 1: Recovery
- Partition 2: System (an EFI system partition that houses the Windows Boot Manager, BCD store, and other key system boot files)
- Partition 3: MSR (a Microsoft Reserved partition that reserves space on the drive exclusively for subsequent OS use)
- Partition 4: Primary (this is where your Windows OS will actually reside, and serves as the Windows system partition)
Windows 8 must be installed to Partition 4, the Primary partition. At this point, UEFI install is properly set up and you can proceed with a clean install of Windows 8 from here. If you get an error message that reads “Windows can’t be installed on drive X partition Y,” don’t worry unless you can’t click the Next button (this is apparently an occasional glitch in the installer, which works properly despite the error message). One more word of warning: UEFI install works only with 64-bit Windows 8; 32-bit Windows 8 versions are not supported!
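If you want to double-check the installer’s handiwork, diskpart can list the resulting layout. This is an illustrative sketch only; partition sizes and offsets vary by drive and build, so treat the values below as assumptions rather than captured output:

```
DISKPART> select disk 5
DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Recovery           300 MB  1024 KB
  Partition 2    System             100 MB   301 MB
  Partition 3    Reserved           128 MB   401 MB
  Partition 4    Primary            118 GB   529 MB
```

The Type column maps directly onto the four-partition scheme described above: Recovery, the EFI System partition, the Microsoft Reserved (MSR) partition, and the Primary partition that hosts Windows.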
Windows 8 has been out for almost two months now in GA form, but the numbers at NetMarketShare still barely register its presence. A quick look at the November 2012 OS pie chart shows that Windows 8 fails to register, except as part of the “other” category.
A further look into the text details shows that Windows 8 registers just behind Linux, whose 1.25% share still leads Windows 8’s 1.09% by 0.16%. It should be interesting to follow the growth of Windows 8’s share of the market pie over the months ahead to see how quickly it can edge its way past Linux and into Mac OS territory (more than 1.25% but less than 2.19%). I predict that it could take as long as one year before Windows 8 edges past the other OSes and starts encroaching into Windows Vista’s current 5.7% marketshare. It’s possible that Windows 8’s move past those non-Windows entries could also coincide with surpassing Vista, but I’m not yet convinced it can muster enough upward momentum to climb that far, that fast.
In any case, this should be an interesting marketshare pie to keep watching. Count on me to report back regularly as and when things start to change.
When is a PC not really a PC? When it’s a virtual machine (VM), of course. In that case, software is used to emulate a physical machine of sorts, as well as the operating system and applications it supports. Burning with curiosity this morning, I fired up DriverAgent to see its take on what goes into the virtual innards of a Windows 8 VM in Microsoft’s latest and greatest desktop environment.
The first thing I noticed about the listing was the lack of hardware details — such as, for example, no CPU (no Processor entry), no monitor, no physical storage devices, and so forth. The next thing was the presence of numerous entries based on the old Intel 810 chipset and its related 82371AB/EB controllers — including a PCI Bus Master IDE controller, a PCI to ISA bridge (ISA mode) controller, and a Pentium II Processor to PCI bridge. All of this points to an older and very generic Pentium II virtual machine architecture for Hyper-V desktop OSes. This makes sense, because a large number of virtualized device drivers are available for this virtual platform, and it makes the rest of the plumbing for such VMs easy to define and hook up, including other no-name system elements like the COM1 communications port, the PCI bus, CMOS/real-time clock, DMA controller, motherboard resources, numeric data processor, a programmable interrupt controller, and so forth. And when there’s I/O to be done, we see virtualized drivers including an Msft Virtual CD/ROM ATA device and a Virtual HD ATA device.
All very simple, if not also entirely rudimentary. Intel’s had emulators for all these parts kicking around for years, and I’m pretty sure that’s where at least the baseline virtual PC elements in this architecture originate. It also gets interesting if you visit DriverAgent and right-click on the itty-bitty icons at the left of each entry that appears in the preceding screencap. Normally, anything with a “real driver” attaches to a link where you can find alternate drivers you might wish to substitute for what you’re currently using (at least on a non-virtual PC). As I click through the list, only the following elements have links attached:
- Intel 82371AB PCI Bus Master IDE Controllers PCI Drivers
- Microsoft Hyper-V S3 Cap (but the corresponding driver page is empty)
- Generic PnP Monitor (driver page empty)
- Intel 82371AB/EB PCI to ISA bridge (ISA mode)
- Intel 82443BX Pentium(r) II Processor to PCI Bridge PCI Drivers
- Microsoft Shared Fax Driver (driver page is empty)
- Microsoft XPS Document Writer v4 (driver page is empty)
Everything else is completely virtualized, probably integrated into the runtime virtual machine image that supports desktop operating systems inside Hyper-V v3. In pondering why Microsoft chose such simple, basic elements and such a rudimentary infrastructure for the virtual architecture, I’m struck by how far back in time the hardware upon which the virtual machine is modeled actually goes. It’s old enough to accommodate even “ancient” Windows versions (a lot of this stuff goes back to the 1999-2001 period) yet capable enough to provide useful PC emulation even for modern desktop operating systems like Windows 7 and 8. When I run either of those OSes on a modestly equipped host PC (i7 2600K with plenty of RAM) they are surprisingly fast and snappy, not much slower than the host OS itself, running on the same hardware.
Given that updating drivers for a virtual machine is fraught with peril, I’m glad that DriverAgent didn’t find any drivers it thinks need to be updated on the test VM where I captured the screenshot. I have to imagine that if and when such driver updates occur, they’ll be handled as part of a rebuild of the baseline VM images, and simply passed down the line for ordinary users to test and deploy as they see fit.
I’m editing an upcoming book for Sybex from my friend and sometimes co-author Darril Gibson. Its current working title is MCSA: Microsoft Windows 8 Complete Study Guide: Exams 70-687 and 70-688 (ISBN-13: 978-1-118-55687-0), and it’s chock-full of interesting and useful information about Windows 8 internals, installation, deployment, and maintenance. I’ve been chewing my way through the Hyper-V chapters lately, and have had the unique pleasure of getting paid to learn stuff I need to know anyway. It’s been great fun, and it’s amped up my understanding of and appreciation for the latest Microsoft desktop OS. This book is due out in the March-April 2013 timeframe for those who may be interested in checking it out further…
In working with and installing VMs inside the Hyper-V environment on a couple of Windows 8 test machines to complete and evaluate the exercises that Darril provides in his book – worth the price of purchase all by themselves IMO – I’ve learned a number of very interesting lessons about working with VMs in Hyper-V on Windows 8, and made some equally interesting observations:
- For moderate usability and performance, Windows 8 and Windows Server 2012 VMs need RAM allocations of at least 3 GB (and with Windows and RAM, more is always better, as long as you can spare it—I try to leave no less than 8 GB of RAM for the host OS to do its thing at all times, which means I’m equipping my new systems with a minimum of 16 GB of RAM on notebooks, and 24-32 GB on desktops).
- Don’t forget you can use RDP to attach to VMs. It beats the default Virtual Machine Connection window that double-clicking a VM entry opens through Hyper-V Manager: not only can you expand the RDP window to fill the whole screen (for VMC windows, resolution is limited to 1024×768), but you also get access to audio from the VM, which the VMC window does not support. You don’t have to strike CTRL-ALT-Left Arrow to change the mouse focus, either: you can just mouse naturally in and out of the RDP window as you like.
- On PCs with (mostly smaller) SSD boot drives – my typical configuration nowadays, and I suspect likewise for many IT pros and PC enthusiasts – it’s necessary to move virtual hard disks (VHD and VHDX files) plus snapshots to another drive to keep from overstuffing storage on your system/boot drive. When you set up a VM you have the option to define an alternate location for such files. I strongly recommend you get in this habit, if space on your boot drive is constrained.
- In Darril’s book he recommends setting up Windows 8 and Windows Server 2012 test environments through a private virtual switch. This is probably a good way to run a test network day-to-day, but you must occasionally change the switch designation over to external (able to access the Internet through the host). First, this is necessary to activate Windows inside the individual VMs, and second, it’s needed to gain access to Windows Update to keep the patch levels of the guest OSes up-to-date and in synch with production environments.
- For test networks, Microsoft’s built-in Windows Defender anti-virus/anti-spyware package is plenty good enough for security protection. Unless you have to install a different security package to maintain parity with production networks, there’s no need to switch to something else. That said, you’ll want to exclude the physical disks where you store virtual hard disk images from antivirus scanning by the host system, not only because of potential performance impacts, but also because they’re already being scanned within the VMs anyway. OTOH, if your AV software permits process-level scanning exclusions, you’ll also want to exclude vmms.exe and vmwp.exe to keep additional process overhead out of the way.
- If possible, put your VHDs on a RAID 10 array, which offers striping performance with mirrored redundancy: this delivers the best I/O performance with the fault tolerance of mirroring. Takes more drives to implement this, to be sure, but offers the best of both worlds (performance and protection). Remember that static, fixed-size allocation VHDs also deliver better performance, while dynamically expanding VHDs help keep disk consumption lower. A classic speed-space trade-off!
- Hyper-V Dynamic Memory is the bomb! Though you must specify Startup RAM and Maximum RAM sizes, it’s a boon to let Hyper-V manage memory consumption in real time. I recommend starting with a generous allocation (4 GB or better for virtual desktops, 8 GB or better for virtual servers), then using Task Manager to observe test or reference VMs at work under various loads; recording startup and peak-load memory consumption lets you set Startup and Maximum RAM values that neither over- nor under-provision your VMs.
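The RDP tip in the list above boils down to a single command from the host. As a minimal sketch (assuming Remote Desktop is enabled inside the guest OS, and using 192.168.1.50 as a purely hypothetical VM address):

```
REM connect to the VM over RDP in full-screen mode; audio follows the session
mstsc /v:192.168.1.50 /f
```

Drop the /f switch if you’d rather size the window yourself; either way you get full resolution and sound, which the Virtual Machine Connection window can’t offer.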
Given that RAM is so cheap these days (I just bought 32 GB – 4 x 8GB modules – for one of my desktops for a paltry $70) do your best to max out your systems with RAM if you want to work with Hyper-V. That way, you’ll be able to make sure the host OS and its guest VMs all have enough working space to get their jobs done!
If you work with Windows desktops, especially virtualized ones, you’re probably already wise to the wiles and virtues of working with Windows images, probably using some mix of virtual disk (.vhd or .vhdx) and Windows image (.wim) file formats. As you begin to work your way into Windows 8 images, you’ll find the built-in Windows Deployment Image Servicing and Management Tool, aka DISM, offers some interesting additions to and enhancements from its capabilities in Windows 7. DISM was also retro-fitted to Vista, but had to be downloaded in the form of the Windows Automated Installation Kit, aka WAIK, itself now superseded in Windows 8 with the Windows Assessment and Deployment kit, aka ADK. I’ve just started digging into the DISM utility more seriously, as I’m trying to work around an EFI disk partition issue on one of my Windows 8 desktops that’s preventing the new record image (recimg) command from capturing an image on that particular machine. Along the way to further understanding, I came across a peachy resource I wanted to share, because it’s likely to be as helpful to other readers as it’s already been to me — namely, the DISM technical reference from TechNet.
This reference not only includes a useful overview, it also includes a useful set of how-tos on using DISM, as well as the complete command-line reference information you’d expect for an important and complex management tool in any system administrator’s toolbox. So far, two items in the how-to collection have proven especially informative in my quest for a current refresh image for my Windows 8 desktops: they’re entitled Create and Manage a Windows Image and How to Take Inventory of an Image or Component. These items have helped me to better understand why, when, and how to use DISM in creating and manipulating Windows image files, and to get my head around the often-complex syntax of the DISM command. I’ve also discovered a CodePlex project called DISM GUI that presents a graphical shell around DISM (the following screenshot shows WIM information for a typical Windows image constructed for a bootable OS install UFD using the Windows 7 USB DVD Download Tool).
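The same sort of WIM information can be pulled straight from an elevated command prompt. For instance, the inventory how-to comes down to /Get-WimInfo invocations like these (the D: drive letter and index value are just placeholders for your own installation media):

```
REM list the Windows editions contained in an install image
dism /Get-WimInfo /WimFile:D:\sources\install.wim

REM show detailed information about one image, selected by index
dism /Get-WimInfo /WimFile:D:\sources\install.wim /Index:1
```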
DISM GUI promises to make real work with DISM more straightforward, too, but I’m not deep enough into its ways and workings yet to comment intelligently on that scenario. All I can say at this point is “Looks good!”
Windows 8 offers at least one major improvement over previous Windows versions — namely, the ability to refresh the underlying operating system while keeping personal files intact. That said, if you use the built-in Windows image file for that purpose on your Windows 8 machines, you’ll end up having to re-install all of the applications you added to the bare-bones version that Microsoft delivers to users and sysadmins alike. But you can avoid the extra work involved in those re-installs, and save lots of time and effort, if you generate a custom image file (.WIM) from a fully tricked-out version of Windows instead. Launching the refresh process is simple: from the Windows 8 Start screen, simply type “refresh,” then select the “Refresh Your PC” option that shows up among the Settings choices. But before you get anywhere near this command, create a more usable Windows image to refresh into, by following these instructions:
1. Launch the Command Prompt with administrative permissions (right-click cmd.exe and choose “Run as administrator,” or press Windows+X and select Command Prompt (Admin)). Click Yes when the UAC warning appears.
2. Create a folder wherein the custom refresh image file will reside. On smaller SSD-based systems, you may want to select a different disk drive. On my test system I created a folder named
E:\RefreshImage. OTOH, you can create the image on your C: drive, then copy it elsewhere later on (on my Lenovo X220, I moved it to an external HD after creation, because it was much faster to build the image on the 120 GB SSD, then move it to the USB external HD thereafter). It can take 20-30 minutes to build an image file, even on a fast SSD, so be prepared to let the PC chunk away in the background on this task. On the Lenovo, the image file consumed just over 17 GB; on my i7 desktop, it topped 30 GB.
3. Type the create image command with appropriate parameters: recimg /createimage E:\RefreshImage. It creates a file named CustomRefresh.wim in the target directory.
Done! When you create your image, the recimg command automatically registers that image with Windows 8, so the OS knows it should use this image when you run the refresh utility. If you maintain multiple images, you can select which one to register by using the /deregister (to prevent Windows from using the most recently registered image) or /setcurrent (to establish the refresh association you want) parameters with the recimg command. Of course, you now need to remember to generate a fresh new image each time you add or remove an application from your Windows 8 desktop, or after applying patches or hotfixes to the OS. That way, you’ll keep your image current should you need to refresh it.
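To pull those recimg pieces together, here’s the whole cycle from an elevated command prompt (E:\RefreshImage and E:\OtherImage are just example folders):

```
REM capture a new custom refresh image and register it as current
recimg /createimage E:\RefreshImage

REM point the refresh facility at a different image, or check which is active
recimg /setcurrent E:\OtherImage
recimg /showcurrent

REM stop using any custom image (reverts to default refresh behavior)
recimg /deregister
```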
The refresh facility is a very nice addition to Windows 8. Using the recimg command to capture fresh images on an as-needed basis makes it even nicer! Also, check out the free SlimUtilities RecImg Manager program, which puts a user-friendly graphical UI around the command-line function.
Generally speaking, if your PC (desktop, notebook, or tablet) runs an i3, i5, or i7 CPU you should be able to install and use Client Hyper-V on a Windows 8 installation on that machine. But the only way to be 100% sure is to check the CPU’s various hardware capabilities to see what you’ve got at your disposal (please note: with only a very few exceptions — such as running Windows XP Mode inside a 32-bit Windows 7 VM on Client Hyper-V in Windows 8 — you can’t run a VM within a VM using Client Hyper-V).
The easiest way to check your PC’s ability to run Client Hyper-V is to download the Windows Sysinternals utility known as coreinfo and run it at the command prompt on the target machine to see what kind of output you get from a command string that reads
coreinfo -v. The following screenshots are labeled to distinguish a system that can run Client Hyper-V from one that cannot.
Thus, the key differentiator is the asterisk (*) versus a dash (-), where the former means the feature is present, while the latter means it’s absent. For Client Hyper-V to work on a target system, both Intel hardware-assisted virtualization and Intel extended page tables (aka SLAT, or Second-Level Address Translation) must be present on Intel machines. On AMD processors this functionality is called Nested Page Tables (NPT or NP) in the context of AMD Virtualization (AMD-V) Technology. For the record, here’s a list of “AMD Processors with Rapid Virtualization Indexing Required to Run Hyper-V in Windows 8” from AMD itself.
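In lieu of the labeled screenshots, here’s a sketch of the relevant lines from coreinfo -v on a Hyper-V-capable Intel system (trimmed to the virtualization entries; exact wording may vary with the coreinfo version):

```
HYPERVISOR      -       Hypervisor is present
VMX             *       Supports Intel hardware-assisted virtualization
EPT             *       Supports Intel extended page tables (SLAT)
```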
This is what coreinfo output looks like from an AMD-based machine that (barely) meets the Hyper-V criteria:
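A text rendition of that AMD output might look like the following (again a trimmed sketch, not a verbatim capture):

```
HYPERVISOR      -       Hypervisor is present
SVM             *       Supports AMD hardware-assisted virtualization
NP              *       Supports AMD nested page tables (SLAT)
```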
For AMD-based systems that don’t meet Client Hyper-V requirements, either or both of the SVM or NP entries will have a minus sign (-) instead of an asterisk (*), to indicate that the feature is missing. Because SVM predates NP, it’s more likely that NP will be absent, though on the oldest AMD CPUs neither NP nor SVM will be supported. If you get both asterisks, you can run Client Hyper-V on your target machine; if either or both hardware-assisted virtualization or SLAT support is absent, you can’t run Hyper-V on your target machine. That’s it!
One of my favorite Windows pundits is Woody Leonhard (a bona fide Windows and Office guru). He definitely grabbed my attention, and started me thinking hard this morning, with his piece for InfoWorld Tech Watch entitled “Is a new version of Windows 8 coming … every year?” He quotes Mary Jo Foley (ZDNet) and Tom Warren (The Verge) as circulating strong rumors to the effect that “yearly upgrades will be the norm for Windows soon.” The most profound basis for the upcoming Windows 8 version, currently code-named “Blue,” is to consolidate SDKs and APIs for Windows 8, Windows Phone 8, and Windows RT under a single umbrella. Woody quotes Tom Warren on this subject as follows:
Once Windows Blue is released, the Windows SDK will be updated to support the new release and Microsoft will stop accepting apps that are built specifically for Windows 8, pushing developers to create apps for Blue. Windows 8 apps will continue to run on Blue despite the planned SDK changes.
His interpretation of Warren’s remarks is consonant with my own — namely, that this updated version of the Windows SDK for Blue will cover all the bases, without requiring separate stuff (or compilations) for the various Windows sub-platforms for ARM-based tablets, x86 platforms, or Windows Phone devices. Woody also remarks that “Once Blue is released, all new Windows Store apps will be required to work on all three platforms. At least, I think that’s what he’s saying.” FWIW, so do I, not just because of the admittedly opaque and obtuse language from Warren himself, but also because that’s what makes sense for users and developers alike (not to mention Microsoft, too, what with having to maintain only a single SDK and development environment once that convergence is completed).
Can this really happen in a year? It’s an ambitious goal, and there’s a lot of work for Microsoft to do to make things ready, after which there’ll be even more work for developers to do to catch up with the latest state of the SDK and the tools that support it. The jury’s still out on the timing, but the idea is solid. I certainly hope MS can deliver on this goal, even if it takes a little longer than is currently forecast.