The real-PC version of Microsoft’s Surface tablet, the Surface Pro, becomes publicly available this Friday, February 9. So far, I’ve seen stories on the new tablet from Paul Thurrott (SuperSite for Windows), Ed Bott (The Bott Report, ZDNet), David Pogue (NYT), Jon Phillips (PCWorld), and countless others (run this Google News search to see dozens of serious, reputable reviews and commentary, amidst thousands of blogs, opinion pieces, news reports and more).
So far, here’s the emerging consensus:
1. Battery life is indeed a problem, as it was expected to be, with 4-6 hours available across varying usage scenarios.
2. Overall functionality and capability (aside from how long the battery holds its juice) draw uniformly positive marks, but that positivity ranges from lukewarm to moderately warm, and seldom rises to rabid enthusiasm.
3. The 1920×1080 true HD display gets uniformly good ratings for sharpness, clarity, and readability, and beats 1366×768 displays hands-down.
4. As a tablet, the Surface Pro gets ho-hum ratings from the experts; as an ultrabook, it does somewhat better, especially with the Type Cover. It’s never positioned as a world-beater by anybody, though.
5. These days a 64 GB SDXC memory card — which the Surface Pro will gladly accommodate — costs anywhere from roughly $40 for a slow model to upwards of $150 for a pretty fast one (over $200 for the fastest models). This provides an easy way to stretch the 89-plus GB of free storage space available on the Surface Pro’s 128 GB SSD. 128 GB SDXC cards should become available later this year, but will also be costly.
In general, the experts are pretty evenly split in their verdicts on the Surface Pro, with about half saying “So-so, not the greatest, not my cup of tea,” and the other half opining that, for those looking for maximum portability with real Windows PC oomph, it’s the best choice among comparable offerings from Acer, Samsung, Sony, and others. In revisiting about a dozen of those reviews, I note that a reviewer’s final opinion often rests on how strongly wedded he or she is to Windows computing in general, versus a more catholic or platform-agnostic view of the overall mobile computing space.
Ed Bott Sez It Best
IMO, Ed Bott summed up the unique position that the Surface Pro occupies in the PC marketplace today in his review for ZDNet entitled “Is the brilliant, quirky, flawed Surface Pro right for you?” Here’s what he has to say at the very tail end of his 3-page story, quoted verbatim:
…this is a great product for anyone who’s already committed to a Microsoft-centric work environment. It isn’t likely to inspire many iPad owners to switch, unless those Apple tablets are in the hands of someone who has been eagerly awaiting an excuse to exit the iTunes ecosystem.
I don’t expect Surface Pro to be a breakout hit for Microsoft. Too many people will have good reasons to say no, at least for now. But it does represent a solid, interesting, adventurous alternative for anyone who wants to spend some quality time today exploring Microsoft’s vision of the future.
To me, this makes the upcoming super-ultra-low-voltage Haswell processors due out from Intel later this year even more interesting — and important for the future survival of the Surface Pro models, and perhaps also for the future of PC and notebook computing as we understand it today. If the new chips mean that Microsoft (and other hardware designers) can trim the extra 0.5″ that currently separates the RT from the Pro model (and others of that ilk), and boost battery life to the 8-10 hour range, by golly, we might just have something. If that doesn’t happen, though, the real question will become: Is there enough there to make buying into this hardware vision worth doing for business and more serious personal users? Methinks not. Methinks further that the emergence of the Chromebook phenomenon is likely to play hob with this entire set of market dynamics.
Dell is officially going private.
The company has been in a transformation phase for some time as cloud computing, tablets, and mobile devices have eaten into its PC and server business. Industry watchers say that going private could help the company compete in new markets.
Here, in its entirety, is CEO Michael Dell’s email to the company’s employees on what this move means for the business as a whole:
Today, we announced a definitive agreement for me and global technology investment firm Silver Lake to acquire Dell and take it private.
This transaction is an exciting new chapter for Dell, our team and our customers. We can immediately deliver value to stockholders, while continuing to execute our long-term growth strategy and focus on helping customers achieve their goals.
Together, we have built an incredible business that generates nearly $60 billion in annual revenue. We deliver enormous customer value through end-to-end solutions that are scalable, secure and easy to manage, and Enterprise Solutions and Services now account for 50 percent of our gross margins.
Dell’s transformation is well underway, but we recognize it will still take more time, investment and patience. I believe that we are better served with partners who will provide long-term support to help Dell innovate and accelerate the company’s transformation strategy. We’ll have the flexibility to continue organic and inorganic investment, and grow our business for the long term.
I am particularly pleased to be in partnership with Silver Lake, a world-class investment firm with an outstanding reputation and significant experience in the technology sector. They know all the technology business models, understand the value chain and have an extremely strong global network of contacts. I am also glad that Microsoft is part of the transaction, further building on a nearly 30-year relationship.
I am honored to continue serving as chairman and CEO, and I look forward to working with all of you, including our current senior leadership team, to accelerate our efforts. There is much more we can accomplish together. I am committed to this journey and I am grateful for your dedication and support. Please, stay focused on delivering results for our customers and our company.
There is still considerable work to be done, and undoubtedly both challenges and triumphs lie ahead, but as always, we are making the right decisions to position Dell, our team and our customers for long-term success.
Don’t ask me why — because I can’t answer that question — but this weekend I chose to use one of my Centon 128 GB USB drives to build a recovery and repair boot-up disk, using the Recovery Drive option on one of my Windows 8 machines. Here’s the crux of the matter: the disk partition for the repair medium gets formatted using FAT32, which limits the size of the resulting drive to 32 GB. Here’s a snapshot of the resulting drive from the Disk Management plug-in for the Windows Management Console (diskmgmt.msc):
Of course this isn’t a problem for USB flash drives of 32 GB or smaller, so probably the most important take-away for readers of this blog post is: don’t use UFDs larger than 32 GB to build a Windows 8 Recovery Drive. If you do, you’ll find yourself in the same situation I found myself in after setting this up — namely, with a whole bunch of unallocated space on the resulting drive that remains inaccessible to the recovery disk itself.
The gotcha is that, unless you dig deep into the command-line diskpart.exe utility, you can’t clean up a drive that’s been set up using the built-in Windows 8 tool. Thus, for example, you can’t delete the bootable partition from what appears as the I: drive in the preceding screen capture. Even if you change its format from FAT32 to NTFS, diskmgmt still won’t let you extend or delete the resulting partition. The first time I cleaned things up, I turned to Paragon’s Hard Disk Manager to do the job, and instructed it to wipe the drive, then to create a single 119 GB partition (that’s the actual size of the drive, as reported in File Explorer and other native Windows utilities, and as confirmed in the Disk Management screenshot above). But that took too long for my impatient self: the drive wipe is very thorough, to meet MilSpec data-wiping standards, and took several hours to complete.

The second time I found myself in this spot was when I created the screenshot that appears earlier in this post, after which I turned to diskpart for the ensuing clean-up. I started with the “list disk” command to confirm that I wanted to work on Disk 5 as shown above, then typed “select disk 5” to turn the utility’s focus to that drive, and finally “clean” to wipe out all of its existing disk format info. At that point, using diskmgmt.msc, I was able to define a new simple volume, format the drive to NTFS, and name it Centon128.
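For anyone who needs to perform the same clean-up, here’s the diskpart sequence I used, typed into an administrative command prompt. The disk number shown is the one from my system; substitute whatever number the list disk output reports for your own UFD, because selecting the wrong disk will wipe it:

```
C:\> diskpart

DISKPART> list disk      (identify the UFD by its size; mine showed up as Disk 5)
DISKPART> select disk 5  (shift the utility's focus to that drive -- double-check the number!)
DISKPART> clean          (removes all existing partition and format information)
DISKPART> exit
```

Once clean completes, Disk Management (diskmgmt.msc) will happily create a new simple volume on the drive and format it NTFS.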
As an inveterate tinkerer, and somebody who’s always finding reasons to troubleshoot Windows systems, I’m always on the lookout for good tips, tweaks, tricks, and repairs. This morning I found a good one over on TechSpot, from Julio Franco, entitled “Windows 8 Boot Issues? Try Fixing the Master Boot Record (MBR) or Boot Configuration Data (BCD).” About two weeks ago, I actually experienced the very issue he documents in that story: boot problems after adding a second SSD to a system. Mine stemmed from the installer’s practice of leaving a separate boot/recovery partition intact on the original boot drive, so that when I removed the original SSD from that system, the new system drive actually lacked a boot partition. I went through the very automatic repair scenario he describes as the first of the various fixes therein.
Here’s a snap of the “Advanced Options” screen that Windows 8 presents when you elect to troubleshoot a Windows 8 installation during the boot-up process:
For more info on how to provoke this menu, and how to use its options and selections, see Kent Chen’s excellent story at Windows 7 Hacker entitled “Microsoft Layouts Windows 8 Boot Options,” itself based on a Building Windows 8 blog post from May 2012 that digs deeply into Windows 8 boot options and behavior. Good stuff, all the way around!
What I like best about Franco’s TechSpot article is that it marches you through the various options for boot repair quickly, with brief but helpful instructions on how to use each one effectively. He starts with automated repair, then digs into basic command-line tools to achieve the same end. After that, he explains how to use BCDEdit if necessary, and then points to the executable for the startup repair tool (StartRep.exe) as a last-ditch method when all else fails. And of course, if none of this works, it’s time to reformat (or replace) the boot drive, and then to restore your latest image backup!
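For reference, the command-line portion of that repair sequence rests on the built-in bootrec utility, run from the command prompt in the Windows 8 recovery environment. These are bootrec’s standard switches; which ones you actually need depends on what’s broken:

```
X:\> bootrec /fixmbr      (writes a fresh master boot record, leaving the partition table alone)
X:\> bootrec /fixboot     (writes a new boot sector to the system partition)
X:\> bootrec /scanos      (scans all disks for Windows installations)
X:\> bootrec /rebuildbcd  (rebuilds the Boot Configuration Data store from that scan)
```

On my dual-SSD misadventure, a /scanos followed by /rebuildbcd was what finally put the missing entry back into the boot menu.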
This morning, I was poking around on the Windows 8 Forums site, and found a nifty tutorial on the improved check disk (chkdsk) utility that’s been built into Windows pretty much since Day 1 of its nearly three decades of life. Alas, there is an error in that tutorial that caused me a bit of stumbling around until I finally had the intelligence to call on the utility’s own built-in help (shown in the following screenshot, along with my attempt to use the new feature that garbage-collects unneeded security descriptor data on the target drive):
Upon looking at that help output, I recognized that the security descriptor switch appears in the tutorial as “sdccleanup” when it should instead be “/sdcleanup”; likewise, “offlinescanandfix” should be “/offlinescanandfix.” With these minor gaffes corrected, I was able to explore the new capabilities and see how they work. I can’t say the changes are laden with drama, but they do offer some nice new capabilities, including the security descriptor cleanup (which will recover more and more space on drives as files are added and deleted over time) and the spot-fix capability, which performs limited repairs without requiring a system reboot (except for repairs on the system/boot volume, which get deferred to the next reboot).
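With the slashes restored, here’s what the new chkdsk options look like in use, typed at an administrative command prompt. The drive letters are from my own system, so adjust to taste; note that /sdcleanup implies /f, so it wants exclusive access to the target volume:

```
C:\> chkdsk C: /scan                (online scan; any problems found are queued for later repair)
C:\> chkdsk E: /spotfix             (brief, targeted repairs; no reboot needed on data volumes)
C:\> chkdsk E: /sdcleanup           (garbage-collects unneeded security descriptor data; implies /f)
C:\> chkdsk C: /offlinescanandfix   (full offline scan and fix; on the boot volume, this runs at the next reboot)
```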
Good stuff: check it out!
At midnight, Wednesday, January 31, the Microsoft budget upgrade offer for Windows 8 expires. I just jumped up to the Windows 8 Upgrade Offer page, and learned that those promo codes Microsoft has been sending me via e-mail (thanks to my various Windows Live accounts) bring the price down from an already-awesome $39.99 to an even more stellar $14.99, if entered into the promo code field during the “pay for it before you get a key” phase of the ordering process.
Act fast, because the “deal” is off starting February 1.
If you don’t want to install the OS any time soon, you can quit the process as soon as you’ve paid for your new license and received a key for a Windows 8 install. You can always grab the ISO file from other sources later on: as long as your key is good, you can wait until you’re ready to install, well after the January 31 deadline comes and goes. In my case, I’ve already built a couple of bootable UFDs with x64 Windows 8 Pro install images — one for UEFI machines, the other for machines with conventional BIOSes. I’ve used them before for numerous Win8 installs, and I’ll use them again with my bargain-basement keys.
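For those who haven’t built such a UFD before, here’s the diskpart recipe I follow, typed at an administrative command prompt (disk 3 is just an example; substitute your own UFD’s number from the list disk output):

```
C:\> diskpart

DISKPART> list disk                 (find the UFD by its size)
DISKPART> select disk 3             (your UFD's number goes here)
DISKPART> clean
DISKPART> create partition primary
DISKPART> active                    (marks the partition bootable for BIOS machines)
DISKPART> format fs=fat32 quick     (FAT32 is needed for UEFI boot; NTFS works for BIOS-only)
DISKPART> assign
DISKPART> exit
```

Then copy the entire contents of the Windows 8 install ISO (or DVD) onto the drive, and it’s ready to boot.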
And again: if you do have a promotional code for Windows 8, it takes the already low $39.99 cost down to an irresistible $14.99. But you must grab your key before midnight Wednesday to take advantage of this pricing. Don’t delay: do it now!!!
In the past three or four months, I’ve messed around with various takes on installing Windows — especially Windows 8 — on PCs sporting the Unified Extensible Firmware Interface, in lieu of the more traditional BIOS firmware used to raise PCs from a cold dead start to normal operation since time immemorial. Along the way, I’ve encountered lots of speculation, rumor, and word of mouth information on this fascinating topic. This morning, I finally came across a detailed reference in the TechNet Library that I wanted to share with all of my readers.
What you see to the left is a snippet from the element of the TechNet Library entitled “Phase 4: Image Deployment,” specifically the entry that’s highlighted in black: “Installing Windows to an EFI-Based Computer.” It explains that you must run Windows Setup, which may or may not take advantage of a special answer file, to perform all kinds of interesting and intricate disk partitioning and formatting as part of the initial setup process.
There’s an equally interesting article elsewhere in TechNet entitled “Sample: Configure UEFI/GPT-Based Hard Drive Partitions by Using Windows Setup” (and a variant that uses Windows PE and DiskPart instead). These items include step-by-step answer file entries (for the first of these two options) or ready-to-run script files (for the second) to handle the details of disk layout, partition assignment and sizing, and so forth and so on.
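To give a flavor of the DiskPart variant, here’s a condensed sketch of the kind of script those TechNet samples walk through for a UEFI/GPT system disk. Partition sizes here are illustrative, and the full sample also covers the Windows RE tools and recovery image partitions:

```
rem Run from Windows PE with: diskpart /s CreatePartitions-UEFI.txt
select disk 0
clean
convert gpt
create partition efi size=100
format quick fs=fat32 label="System"
create partition msr size=128
create partition primary
format quick fs=ntfs label="Windows"
exit
```

The answer-file approach encodes the same layout in XML for unattended Setup; the script approach does it interactively (or via /s) before you apply an image.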
So far, this is the first and best detailed set of instructions and information on working with UEFI that I’ve found. It’s already helped me to make sense of the kinds of default disk layouts that Setup creates on its own when you perform a UEFI-based install, and I now understand how to increase the size of some of the non-Windows disk partitions that have occasionally given me trouble in the past (particularly the Windows RE tools, MSR, and Recovery Image partitions that Setup creates on its own).
What I still long to do, however, is to boot my UEFI PCs into the EFI shell immediately following start-up and learn how to work on my systems inside that pre-Windows boot run-time environment. I’ve read the Intel books (Beyond BIOS… and Harnessing the UEFI Shell), but I’ve yet to get to the command line and do anything with it after booting into EFI. Because I’m dying of interest and curiosity, I’m hoping some reader can recommend how I can build a boot USB or CD-ROM that will actually put me into a working run-time environment after booting into EFI.
Otherwise, these resources make me feel much more comfortable working with EFI-based installs, image captures, and deployments. Though I haven’t yet reached UEFI nirvana, that makes me feel a whole lot better about this stuff. Hopefully, other readers will benefit from access to these resources as well.
In my last blog, “What Gets Lost When Using Win8 Refresh,” I recounted my adventures after running the “Refresh your PC” facility built into Windows 8 on a machine that refused to let me use the record image (recimg) command to set up a current system image as my refresh basis. After additional work with the newly-refreshed system, I have to point out that what happened to my test machine was a worst-case scenario — namely, what happens when you permit the refresh to take your system back to the state it occupied just after you installed the OS.
Even so, this proved to be an extremely helpful maneuver for that system, and here’s why:
- Before the refresh, the recimg command would not complete successfully. After the refresh, it worked like a charm, and I used it to capture a current image that includes all the new drivers I’d had to install, as well as all of the apps and desktop applications I decided were worth installing once again. As I pointed out in my last blog, this means that even if you do have to endure the worst case, you will probably have to endure it only once.
- Before refresh, Hyper-V wasn’t working properly for me, either. I’d exported, then imported, my collection of virtual machines, to move them from a slower to a faster drive, only to discover that I couldn’t get them to work properly with an external switch to permit those virtual machines to access the Internet. None of the tweaks, tricks, or re-configurations I tried would fix this problem before the refresh. After the refresh, I had to redefine my Hyper-V settings and re-import my virtual hard disks (.vhdx files in all cases). But as soon as I did so, and defined a new external switch, everything worked just as it should have all along.
The stated purpose for refresh is to restore a troubled Windows installation to normal and stable operation. In this case, it took an install that was having issues with several key capabilities and replaced it with an install that showed none of those problems. Having troubleshot and noodled around with mysterious Windows gotchas for years, this strikes me as nothing short of miraculous. And if you take the time to create a custom refresh image at good opportunities (following a clean install, after applying all the Windows updates that policy permits, updating all drivers, and installing all apps and applications to which users are allowed access, and again after later updates, driver changes, or newly added apps or applications), you can get back to a normal, stable state with minimal effort any time after that.
But the wonder and beauty of refresh comes with one interesting caveat: you should get in the habit of recording the complete file specification for the refresh images you save, especially those you don’t save on the system drive in the default directory (if your user systems are like mine, many of these use smaller SSDs for boot/system purposes, and nobody wants them cluttered up with big backup files). The following screencap illustrates why this is the case, showing the results of the recimg /showcurrent command, which displays the location for the most recently collected refresh image on a given system:
The mappings between Windows disk numbers and drive letters aren’t always obvious. For example, on this particular test system, the image file’s full specification in File Explorer is F:\RefreshImage\CustomRefresh.wim. The directory that holds that file (F:\RefreshImage) is what you must supply as the target when invoking the recimg command to register that particular image, so it’s wise to record it somewhere, so you can provide the right specification should you ever need to point recimg at that image again.
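Here’s a quick rundown of the recimg switches involved, as typed at an administrative command prompt (the F:\RefreshImage path is the one from my test system; note that recimg addresses the directory holding CustomRefresh.wim, not the .wim file itself):

```
C:\> recimg /createimage F:\RefreshImage   (captures CustomRefresh.wim there and registers it as current)
C:\> recimg /showcurrent                   (reports the directory holding the active refresh image)
C:\> recimg /setcurrent F:\RefreshImage    (points refresh at a previously captured image)
```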
I’ve frequently looked at and pondered the meaning of the following “warning display” that precedes the use of Windows 8’s much-vaunted “Refresh your PC” maneuver. Last week, I actually launched this tool to truly understand what it would do to a PC if put to work. Going through those motions illuminated this warning with some interesting and — at least, for me — unforeseen implications of what’s really involved in the kind of refresh that returns Windows 8 to “factory fresh” settings.
As it turns out, the promised list of apps removed is quite illuminating. Too bad it comes only after you’ve committed to performing a refresh. I’d recommend that MS consider performing a preliminary scan, and report this information before actually doing the refresh, so as to permit potential users of the utility to better assess the impact on their Windows 8 PCs. A quick look at this list gives me the opportunity to explain where I’m going with this, and one great big honkin’ gotcha that lurks therein:
Indeed, I expected my applications to be gone when I restarted my PC after doing the refresh. The warning is quite clear in that regard. But I didn’t realize that, because installing Windows drivers often occurs in the context of running some kind of install utility, the same thing would happen to the bulk of the device drivers installed on that PC as well. According to a favorite driver maintenance tool I use regularly — namely, DriverAgent — I had zero drivers out of date before I ran PC refresh. After running the refresh, I found myself with 21 (out of 69 total) drivers out of date, with all the lovely headache and aggravation that comes along with running down, obtaining, and installing Windows drivers these days. It wasn’t terribly difficult, but it did take more than half a day for me to get those drivers installed and working after I’d laid hands on the most recent versions of the files involved. Now, my count of out-of-date drivers is down to one (it’s for an Intel 82579LM Gigabit Network Connection I’m not actually using on that motherboard; though I’ve found the most current driver, I haven’t yet figured out how to install it on this particular unused device — that is, I can install it, but the install doesn’t seem to “take”).
7Zip Comes to a Partial, but Much-Appreciated Rescue
Along the way, I also learned an extremely valuable driver update technique. Entirely by accident (I picked the wrong right-button menu entry when opening a file) I discovered that 7Zip will open executable files and extract all their embedded contents where you tell it to put them. Because many driver updates come in installable packages (some of whose contents you may not want or be unable to install on your machine, as for example when seeking to apply a custom update for motherboard x against a completely different model y from a different manufacturer) this turns out to be a great way to grab the .inf, .cat, and .dll files that so often make up the actual drivers themselves, without having to work through an installer that might also want to load your machine down with unwanted management and supporting utilities along the way. The most extreme case of this comes from some Marvell disk controllers, which insist upon installing an outdated version of Apache server as part of their management infrastructure when run as-is. I don’t want or need that stuff (as I suspect many others also do not) but until I found this technique to get to the good stuff without also taking on (and then later manually deleting) unwanted elements, I never found an expeditious way to deal with this common driver issue. Even Legroom Software’s Universal Extractor (which has in the past proved incredibly useful in doing the same kind of thing) isn’t as quick or easy to use as 7Zip for this particular application. At the same time, 7Zip has shown itself able to unpack every .exe driver installer I’ve thrown at it, while Universal Extractor fails to do that job on about half of those same files nowadays.
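For the record, the same extraction works from the command line, too. Here’s a sketch of the technique (the installer file name is hypothetical, and the exact .inf path inside the extracted tree will vary by driver package):

```
C:\> 7z x DriverSetup.exe -oC:\Extracted       (unpacks everything embedded in the installer)
C:\> pnputil -i -a C:\Extracted\Drivers\*.inf  (installs and adds the extracted driver package)
```

Alternatively, skip pnputil and point Device Manager’s “Update Driver Software…” dialog at the extracted folder, and let it find the .inf on its own.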
The Real Value of the Windows 8 recimg Utility
On December 7, 2012, I wrote a blog post here entitled “Create Your Own Refresh Image for Windows 8,” which explains how to use this command-line utility to capture a Windows image (.wim) file that the refresh facility can later use as a “restore point” (or should that be “refresh point”?) in the future. I now understand that the real value of this approach is its ability to preserve all the drivers on a PC, as well as the apps installed following system installation. One interesting side effect of my manual refresh of the system is that the recimg command now works to capture my cleaned-up image for me (I had been working under the impression that the EFI partition on its system disk was preventing recimg from working, but it’s running on that system as I write these words). Should I need to refresh my PC again in the future, I no longer have to go back to ground zero! Now, if I could only figure out what screwed up in my original install in the first place… Sigh. Windows!
OK, by now everybody’s heard about the Department of Homeland Security’s Advisory (originally released on 1/10/2013, most recently updated yesterday, 1/17/2013). Here’s the meatiest part of that document’s recommendations:
Unless it is absolutely necessary to run Java in web browsers, disable it as described below, even after updating to 7u11. This will help mitigate other Java vulnerabilities that may be discovered in the future. [The advisory includes pointers to descriptions for how to disable Java in most major modern browsers, and there are plenty of other articles on the Web that explain how to do this for less popular ones, too.]
The guiding principle behind the DHS recommendation is risk avoidance — namely, that the only way to avoid future zero-day vulnerabilities in Java is to turn it off, since there appears to be no way to guarantee these can’t happen again. In fact, the very day after Oracle posted update version 11 (1/15/2013), a cybercrime forum posted a message that a new zero-day exploit kit for Java would be sold off to the two highest bidders at a starting price of $5K (source: InformationWeek Security). InformationWeek security maven Mathew J. Schwartz quite accurately labels Java an “attack magnet” in a recent story entitled “10 Facts: Secure Java For Business Use.” Among his recommendations that fall shy of what the DHS Advisory implores (“disable Java”), he mentions use of management tools like PolicyPak to restrict access to questionable or unauthorized Java code (PolicyPak can even disable Java completely by policy, should that prove necessary). He also mentions use of white-listing tools such as NoScript for Firefox or Adblock Plus (for Chrome, Firefox, and Opera), both of which permit whitelisting of specific sites for active content while denying runtime access to all other active content.
My favorite among his recommendations is to maintain one browser, with Java disabled, for everyday surfing and Web access, and another, different browser to use only when accessing known good Java-based active content that must be used for legitimate business reasons. One would turn to the Java-enabled browser only when circumstances compelled its use, and avoid using it otherwise. Schwartz also suggests that Oracle should patch faster, perhaps by devoting more resources to Java’s upkeep and maintenance. The company’s planned two-year release cycle for Java, scheduled to begin with version 8 in September 2013, may or may not help to improve security. What would help, however, is to decouple the primary Java runtime environment from the Java browser extension; as things stand, end users often install and expose that extension to attack without even being aware of the exposure it creates and the vulnerabilities it presents. Schwartz quotes an expert from Stach & Liu as saying “Since so few websites legitimately use the Java browser extension, it is most prudent to disable it entirely” or perhaps to “only re-enable it for specific sites determined to be trustworthy.”
These days the rule of thumb for Java use seems to be “Use it only when nothing else will work, and only when what’s used is known to be safe from potential vulnerability and attack.” Because it’s so hard to be sure, the DHS recommendation to disable first and ask questions later makes a depressing amount of sense. I still have to visit enough Java-based websites to write about them that I’ve set up a special VM (snapshotted daily) where I keep a browser with Java enabled, and I only work in that VM when I absolutely must use Java. If the worst happens, I can always toss an infected or exploited VM and revert to the previous snapshot. It’s not completely foolproof or totally secure, but it does work, and it protects my primary production runtime environment from attack and potential compromise.