I generally don't spend a lot of time following the rumor mill for Windows 8, but if Windows 8 is going to ship in October of this year — and I'm still pretty convinced it will be out before Halloween — it's about time for the Release to Manufacturing (RTM) version to head out the door so the OEMs can start prepping their systems for the eventual general availability (GA) release. To that end, I tuned into the usual sources for such information to see what was circulating around the rumor mill, and found a fair degree of unanimity among the usual suspects (Mary Jo Foley, Gregg Keizer, Ed Oswald, and so forth). Across all of the sources I recently polled and trolled, most people think the RTM will hit the week of July 16.
The big push, of course, is to get the OS done in time for holiday pre-sales. Thus, even the end of October is pushing things out as far as they can go to hit this oh-so-necessary sales window. That's the only way to get machines with the new OS pre-loaded into buyers' hands soon enough to take advantage of holiday buying. The real question then becomes: "Does anybody really WANT a Windows 8 PC — or even a Windows 8 Surface tablet, of the RT/ARM or Intel persuasion — under their tree?" I'm sure lots of folks in the Redmond area are having trouble sleeping at night fretting over that very question.
Hence my intention, and your possible look-out: to watch for an announcement of an RTM date soon. Might be as early as the end of next week, but probably not until some time the week after. But only time will tell, so stay tuned.
If you allow Windows to track and report on errors, every time your PC experiences some kind of problem it “phones home” to Redmond, and reports on what’s happened. It also promises to send you information about any related solutions that may come up as a result, but for most of us, a much more typical response to seeking solutions for such problems looks like this in the Action Center interface:
As it happens, however, Microsoft also researches the causes for and sources of such problems, thanks to the telemetry that delivers all this information to their tracking servers. They’ve just published their first-ever report on this data. It’s called “Cycles, Cells, and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs.” The summary for the report is both interesting and informative enough to be worth verbatim reproduction, so here goes:
We present the first large-scale analysis of hardware failure rates on a million consumer PCs. We find that many failures are neither transient nor independent. Instead, a large portion of hardware induced failures are recurrent: a machine that crashes from a fault in hardware is up to two orders of magnitude more likely to crash a second time. For example, machines with at least 30 days of accumulated CPU time over an 8 month period had a 1 in 190 chance of crashing due to a CPU subsystem fault. Further, machines that crashed once had a probability of 1 in 3.3 of crashing a second time. Our study examines failures due to faults within the CPU, DRAM and disk subsystems. Our analysis spans desktops and laptops, CPU vendor, overclocking, underclocking, generic vs. brand name, and characteristics such as machine speed and calendar age. Among our many results, we find that CPU fault rates are correlated with the number of cycles executed, underclocked machines are significantly more reliable than machines running at their rated speed, and laptops are more reliable than desktops.
Lest you be inclined to pooh-pooh this report and its contents, it's probably worth observing that it received the "Best Paper" award at the ACM's EuroSys 2011 conference (the ACM, or Association for Computing Machinery, is a leading computer-science professional organization to which I have belonged since 1982).
Joel Hruska from ExtremeTech overviews its findings in an excellent story entitled “Microsoft Analyzes over a million PC failures, results shatter enthusiast myths.” I’ll summarize the high points here:
- The longer a CPU runs, the more likely it is to crash. Machines with less than 5 days of active use over an 8-month period (what MS calls Total Accumulated CPU Time, aka TACT) have a 1:330 chance of crashing. Machines with over 30 days of TACT over the same 8-month period have a 1:190 chance of crashing.
- Once a hardware fault appears, it is 100 times more likely to recur after that. 97% of machines tend to crash from the same cause within a month of the first such crash.
- Over-clocking (no surprise there) is likely to cause crashes, while underclocking makes them less likely. Figure 3 from the report summarizes the overall overclocking findings. For underclocking, CPU failure odds improve from 1:330 (stock) to 1:460 (underclocked); DRAM one-bit flip errors drop from 1:2000 (stock) to 1:3600 (UC); and disk issues drop from 1:380 to 1:560. This also confirms the conventional wisdom that underclocking improves PC reliability (it definitely reduces heat output, which is probably related).
- Surprisingly to some (but not to me, based on lots of hands-on experience), laptops proved to be more stable than desktops, countering the researchers' own expectations.
- PCs from major systems vendors (such as Dell, HP, Asus, Lenovo, and so forth — defined as the “Top 20 computer OEMs” in the report) proved more reliable than those from all other vendors, with 1:120 for CPU problems (OEMs) versus 1:93 (everybody else), and 1:2700 (OEMs) for RAM one-bit flip problems versus 1:950 (everybody else).
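The "1 in N" odds in the study are easy to misread, so here's a quick sketch of the arithmetic (my own, not code from the paper) showing how they convert to probabilities, and where the huge recurrence multiplier comes from:

```python
def p(one_in_n):
    """Convert a '1 in N' chance into a plain probability."""
    return 1.0 / one_in_n

# Figures quoted in the report's abstract
first_crash = p(190)    # CPU-subsystem crash odds, machines with 30+ days TACT
repeat_crash = p(3.3)   # odds of a second crash, given that a first occurred

# How much more likely a repeat crash is than a first one
multiplier = repeat_crash / first_crash
print(round(multiplier))  # ~58x for this subsystem
```

That 58x figure covers just this one CPU case; the paper's "up to two orders of magnitude" language reflects the worst-performing subsystems.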
All in all, the report makes for some interesting reading, and it suggests that MS may be learning quite a bit from this data in the aggregate, however unresponsive its forwarding of problem solutions through the Action Center might seem. It should be interesting to keep an eye out for future such findings.
An interesting story appeared in ExtremeTech earlier this week. Entitled "The fanless heatsink: Silent, dust-immune, and almost ready for prime time," it digs into a recent invention from the eggheads at Sandia National Laboratories called the Sandia Cooler heatsink. The reason I designated the fanless heatsink terminology as "so-called" in my blog title is that calling this technology fanless is something of a misnomer. Actually, the heatsink's heat dissipator IS a fan, so no additional fan is needed to provide cooling for this particular device, as shown in the photo below (reproduced from the ExtremeTech story cited above):
According to the ExtremeTech story, there are lots of interesting wrinkles to this invention, which sounds pretty much like a "gotta-have-it-yesterday" PC technology to me. First, it's reportedly 30 times more efficient than current heatsinks. Second, it uses a "cast metal impeller" that floats 0.03 mm above a metal heat-pipe spreader, and is powered by a brushless DC motor integrated into the unit itself. Third, the impeller is extremely quiet (no sound measurements are provided, but even the quietest of fan-based coolers emit at least 30 dBA, so I'm guessing it's in the lower 20s if not quieter than that — basically inaudible, especially in typical household or office environments, which usually feature ambient noise levels in the 35-45 dBA range). There's a video in the linked story that shows how quiet it is, and that's pretty quiet indeed. Fourth, the impeller has been designed to resist dust build-up, owing to its constant rotation at 2,000+ RPM and its use of centrifugal force to drive dust out of the air gap between the heatsink (the impeller) and the heat spreader (the metal heat-pipe spreader attached to the processor package). Fifth, the Sandia folks estimate that "if every conventional heatsink in the US was replaced with a Sandia Cooler, the country would use 7% less electricity."
OK, I'm sold, but apparently it's going to be a while before this technology makes it into a commercial cooling product. According to the story, a company has licensed the technology for PC cooling, but that company hasn't been identified, nor have any such products been announced. Rats! I was hoping to rush right out and buy some immediately. Let's hope it doesn't take toooooooooo long to come to commercial fruition. Gotta have it!!!
Last February, in getting ready to work on a Windows 8 book — now abandoned, alas, in favor of other work — I purchased a couple of Lenovo notebook PCs. My X220 Tablet has become my go-to touchscreen Windows 8 test PC, and my T520 notebook has proved itself to be a solid and dependable traveling PC as well. In learning all about my Lenovo units, I’ve become familiar with a class of compact solid state SATA drives that use a special Mini-SATA or mSATA connector, not least because both of these notebooks will accommodate an mSATA SSD in the same slot into which you might otherwise plug a WLAN card.
(Photo from Wikipedia entry, Wikipedia commons, Author Bdortiz1076)
mSATA is essentially the same form factor as mini-PCIe (PCI Express Mini Card interface) and is becoming increasingly popular for SSDs. Right now, all of the major vendors — including Intel, Samsung, OCZ, SanDisk, ADATA, Transcend, SuperTalent, and so forth — offer mSATA SSDs in capacities from 20 GB to as large as 256 GB. They tend to be more expensive than their 2.5″ packaged counterparts, and some care must be exercised in picking units compatible with the chosen host PC. But I’ve had good luck with both of my Lenovo units in using an 80 GB Intel 310 mSATA drive, even if one must perform a clean install of the OS to get the machine to recognize the mSATA SSD as the boot drive.
For notebook PCs, the great thing about mSATA is that, when available, it provides an extra drive slot that's perfect for a smaller boot drive (60-80 GB for a PC with 4-6 GB RAM; 120 GB or larger for a PC with 8 GB RAM or more), which leaves at least one slot open for a conventional 2.5″ hard disk or SSD, depending on performance needs and budgetary constraints. What I really like about my T520 Lenovo notebook is that I can (and did) buy a $40 swap-out, snap-in replacement for the optical drive module that lets me add another hard disk, for a total of three drives in that machine. Right now, I've got two 750 GB 7,200 RPM drives for storage, and a snappy 80 GB mSATA boot drive, for a pretty winning combination of speed and storage capacity.
But here’s another interesting news flash to consider as well: a growing number of motherboard makers — including at least Asus, Intel, Jetway, Gigabyte, and Zotac (see this Google search) — are selling modern mobos (mostly socket LGA1155) with built-in mSATA interfaces. These are smart enough to recognize mSATA devices in UEFI or BIOS, and to propel them to the top of the boot hierarchy by default. I’m starting to think that I might know what kind of motherboard I’ll be buying for my next desktop build, in fact…
Windows guru Paul Thurrott has posted an interesting and disturbing story about some problems he's encountered recently with Windows 8 Release Preview. It's entitled "Broken Windows? Two Serious Issues That Make Windows 8 Release Preview Almost Unusable (For Me)." In that story he recounts issues with:
- Hard Crashes — or what I would call freeze-ups — of the Windows 8 runtime environment. Anybody who's worked on Windows for any length of time is familiar with this phenomenon, where for some reason Windows simply stops responding to input. Sometimes you can move the cursor around for a while, but the system won't respond to anything, not even CTRL-ALT-DEL or CTRL-SHIFT-ESC. The only fix is to cycle the power on your PC and perform what Reliability Monitor calls an "abnormal" or "unexpected" shutdown. I've experienced this phenomenon many times myself, including once or twice on the Windows 8 Release Preview on my desktop test machine (but never on my Lenovo X220 Tablet).
- Networking failure: Thurrott reports an issue with large file transfers where the file copy window never closes or indicates copy complete (though the copy does conclude successfully), followed by network issues for the affected PC. It can't access web sites, transfer other files, or access the network properly. The problem remains active even after resetting the network connection, switching cables, resetting local routers, and even rebooting. After a while the problem goes away, but then recurs intermittently afterward. Weird. Disturbing. Scary, even. And Thurrott reports that several hundred other users have responded to his article with reports of similar experiences. Thank heaven I've never experienced this issue as he describes it: though I have seen large file transfers seemingly stall for a while, they've never been followed by the litany of symptoms he describes in his story (and has had confirmed by fellow sufferers).
In the past month, I've replaced my previous production desktop — a still powerful and more than adequate QX9650 quad-core machine on which I installed a 32-bit version of Windows 7 Ultimate long, long ago, and whose memory limits had simply become intolerable for me — with a new production desktop built around an i7-930 Bloomfield (45 nm) CPU, with a fast OCZ Vertex 4 180 GB SSD and 24 GB of RAM. At the same time, I also moved an old Dell All-in-One 968 printer up to my wife's office upstairs, and plugged in a newish Samsung ML-2851ND laser printer in its stead.
Ever since I swapped the printers out, my production desktop has occasionally hung or shut itself down spontaneously. Even more curiously, those events correlate pretty much directly with use of the printer, and are more pronounced for long or demanding print jobs. I'd been scratching my head and wondering what was up until the solution dawned on me, and I fixed the problem with the switch of a single power-cord plug-in. All the information you need to solve this puzzle is included in this photo of the cable maze under the desk in my office.
My Smart-UPS 1500 is an older UPS, but it still works with the latest APC PowerChute Business Edition software (v9.0.1). When I switched the old printer for the new one, however, I neglected to take into account the difference in power draw between the two units. The Dell is an inkjet, which sips a very modest peak of 32 W. The Samsung, though much smaller than the Dell unit, is a laser, and pulls a much less modest 400 W at peak consumption.
I had literally pulled the plug for the AIO 968 and inserted the power cord for the Samsung ML-2851ND into the same receptacle. I hadn't stopped to think that a laser draws significantly more power than an inkjet, because of the high temperatures the fuser must generate to melt the toner and get it to stick to the page while printing. As soon as I pulled the plug from its receptacle in the octopus you see with the yellow plug ends at the right-hand side of the photo, and plugged it directly into the wall outlet, all my problems were solved, solved, solved.
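Here's a rough, back-of-envelope sketch of why the laser didn't belong on the UPS in the first place. The UPS wattage ceiling and the fuser inrush multiplier are my assumptions (check your own unit's spec plate), but the printer peak figures come from this post:

```python
ups_watt_ceiling = 980     # assumed real-watt limit for a Smart-UPS 1500 (1500 VA)
inkjet_peak = 32           # Dell AIO 968 peak draw, in watts
laser_peak = 400           # Samsung ML-2851ND rated peak draw, in watts
fuser_inrush_factor = 2.5  # assumed transient multiplier as the fuser heats up

# The inkjet barely registers; the laser's heating transient can swamp the UPS
laser_transient = laser_peak * fuser_inrush_factor
print(laser_transient > ups_watt_ceiling)  # the momentary spike exceeds the ceiling
```

Even if the exact multiplier differs on your gear, the lesson generalizes: laser printers and UPS battery outlets don't mix.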
And guess what? The UPS is no longer beeping at me every hour or two (which it was doing each time the printer woke up and apparently pulled a surge of current through the fuser), either. I just wish the PowerChute software had been smart enough to tell me that it was seeing excess power demands instead of just making noises. But that's the way it goes sometimes when troubleshooting mysterious and intermittent system problems.
Among all of the many speedy SSD drives out there, the Samsung 830 drives are close to, but not at, the top of the pack (see, for example, the Tom's Hardware SSD Hierarchy Chart, which puts it behind the Crucial m4 and the Intel 320 series). But when this Newegg deal hit my inbox this morning, I just had to share it with my readers, because it's a genuine "Killer Deal:" $90 for a nominal 128 GB drive (actually, more like 119-120 GB is reported in Windows Explorer as the drive size, thanks to the difference between decimal and binary gigabytes).
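The "missing" capacity is just unit arithmetic, by the way. Drive makers count a gigabyte as 10^9 bytes, while Windows divides by 2^30; a quick sketch:

```python
advertised_gb = 128                  # decimal gigabytes, as printed on the box
total_bytes = advertised_gb * 10**9  # what the drive actually holds
windows_gb = total_bytes / 2**30     # what Explorer labels "GB" (binary units)
print(round(windows_gb, 1))  # ~119.2
```

So nothing has been "stolen" from the drive; the two camps simply count gigabytes differently.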
It’s a great add-on to just about any PC, be it a boot drive for a desktop, or a general replacement drive for a notebook. If you’re in the market, and have the cash to spare, this really is too good a deal to pass up. Follow this Newegg link to get to the Samsung 830 128GB product page, and don’t forget to use the Promo Code EMCYTZT1751 as you work through the payment process.
Earlier this week (June 6) at Computex in Taipei, Taiwan, Microsoft Corporate VP Steven Guggenheimer announced that Microsoft has sold over 600 million licenses for Windows 7. Hmm. Let's do some calendar math: this OS shipped on October 22, 2009. That means roughly two months in 2009, 12 each for 2010 and 2011, plus a bit over five for 2012, for a total of about 31.5 months of sales all told. That translates into average sales of roughly 19 million copies per month for every month since the product hit the streets.
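For the skeptical, the calendar math can be redone with exact dates; this is my own back-of-envelope check, and the exact day count matters more than you'd think:

```python
from datetime import date

ga_date = date(2009, 10, 22)    # Windows 7 general availability
announced = date(2012, 6, 6)    # Guggenheimer's Computex announcement

months = (announced - ga_date).days / 30.44  # average days per calendar month
per_month = 600e6 / months

print(round(months, 1))           # ~31.5 months on the market
print(round(per_month / 1e6, 1))  # ~19 million copies per month, on average
```

Either way you slice it, that's a staggering run rate for any piece of software.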
I can remember months earlier in the sales cycle when total copies sold ranged between 20 and 25 million, but that's still pretty darned impressive. Considering that 400 million copies of XP were in use in January 2006, and that Microsoft stopped selling this incredibly popular Windows version in January 2009, there may be more than 600 million copies of XP still in use today, but I'd be surprised if there were more than 800 million overall. These waters are also muddied somewhat by Microsoft's Windows XP Mode giveaway, which puts a free copy of a Windows XP VM into the hands of any owner of Windows 7 Professional, Enterprise, or Ultimate who decides he or she wants one.
Last month's Desktop Operating System Market Share numbers from NetMarketShare.com (May 2012) seem to bear out my analysis. They still show XP with a 3.34% edge in market share over Windows 7, with all other OSes combined comprising less than 15% of all desktop seats:
Windows 8 will certainly make all of this extremely interesting, probably right about the time that Windows 7 celebrates its third birthday this October. My guess is that Windows 7 will surpass, and quite possibly even eclipse, Windows XP sometime in the next two to three years. Whether Windows 8 takes off and runs in Windows 7 fashion, or limps along as Windows Vista did instead, is still anybody's guess. And even though most of the experts I respect have learned to appreciate Windows 8 (especially Paul Thurrott and Ed Bott), I don't think anybody is expecting it to be as successful as Windows 7 has been, is right now, and will continue to be.
File this one under the heading of “Another Windows war story.” It dwells on strange shenanigans, and lessons learned, in switching over from an old, familiar, and reasonably stable desktop to a newer and snappier, almost unknown, and possibly stable replacement desktop. My biggest reason for making the switch comes from increasing use of virtualization, where 4 GB of RAM just doesn’t cut it any more. And then, too, there’s always the chance to get a bigger, faster, more powerful machine any time you make such a move.
Right now, I'm almost through migrating from my three-year-old production PC (Gigabyte X38-DQ6 mobo, QX9650 quad-core CPU, Windows 7 Ultimate x86, Intel 80 GB SSD, and 4 GB RAM) to my year-old former test machine (Asus P6X58D-E mobo, i7 930 CPU, Windows 7 Professional x64, OCZ Vertex 3 SSD, and 24 GB RAM). I had overclocked the test machine to see how fast I could push the i7 930 Bloomfield processor it contains. Rated at 2.8 GHz, I got it to 3.8 GHz with what I thought was a reasonable degree of stability, and pushed the 667 MHz memory to 800 MHz without any signs of instability as well.
But alas, those conditions persisted only until I switched the machine from test to production duty, and really started hammering away at it. And of course, I started hanging the typical plethora of peripherals most production machines tend to acquire (and with which very few test systems must ever contend): two 27″ monitors, a laser printer, USB keyboard and mouse, USB media card reader, 2 USB external drives (1 USB2, the other USB3), 2 eSATA external drives along with two more internal 1 TB+ conventional hard disks, and a high-end Axiom audio output rig to my speakers.
I also doubled up the memory in the unit — this mobo uses triple-channel memory, so I'd inserted 3×4 GB DIMMs for 12 GB of total RAM for testing. Another trio of the same memory modules (G.Skill F3-12800C19-4GBRL units that run 9-9-9-25-34 at 667 MHz) brought the total RAM configuration up to 24 GB, now running quite nicely on my new production desktop. Here's a snap of CPU Monitor showing the new clocking and memory size:
On Sunday morning, I sat down at the machine to search out and install Samsung's own latest driver for its ML-2851ND laser printer. When I'd finished that task and was testing how well it worked, the machine started shutting down on me. Because the Devices and Printers widget in Control Panel appeared to have returned to normal operation, I didn't think the problem was driver-related. My suspicions that the print driver wasn't the culprit were confirmed when (a) I succeeded in printing test and other pages without difficulty, and (b) the machine continued to shut down and crash intermittently over the next two hours as I got into troubleshooting mode.
Having seen weird behaviors in the past on Gigabyte motherboards (in the ICH3-ICH7 era) when all memory slots were populated, I first tried removing half the RAM to see if the system would stabilize. No joy. The next thing I did was jump into the BIOS and turn off the overclocking for both the CPU and the memory channel, and presto! Everything settled down to its usual rock-solid behavior, so I made a disk image. After installing a bunch of useful but not mission-critical utilities to give the system a workout, I was satisfied that stability was restored. And in the 20 hours or so since I re-inserted the three new RAM sticks, the machine has continued to run without any serious hiccups (other than a disconnected wireless mouse transceiver that fooled me into a forced shutdown), as shown in my current Reliability Monitor graph:
Before I started migrating on 5/28, the test machine showed nothing but solid "perfect 10" performance. Once I started installing new devices and drivers on 5/28 (the first big dip in the curve), I shook things up with a Windows hang, and a couple of major issues with my Dell AIO 968 drivers (that printer is now happily attached to my wife's PC upstairs, where we use it only for printing color output). Configuring various applications — Outlook, mostly — got me dinged once, and realizing that the ML-2851ND driver I downloaded from DriverAgent was hosing my machine cost me a couple more hickeys as well.
Yesterday, I got dinged when trying to remove the old ML-2851ND driver caused a system crash, and then again when the system started spontaneous shut-downs immediately thereafter. I still have issues with the video driver for my GeForce GTX 460 shutting down right after system startup, but the PC recovers quickly and without discernible side effects, so I’m OK with waiting to identify and install a more stable driver for that graphics card.
Otherwise, returning to safe clock settings for CPU and RAM seems to have brought things to a quiet, steady level — just the way I like them. And now, the new production machine is starting to feel like a real production machine, indeed.
OK, so now I've got a few more hours with the latest Windows 8 release under my belt, and so far, so good. My biggest initial beef with the latest release is that it seems not to have absorbed many new drivers in the time between the Consumer Preview and the Release Preview. Now that I reflect back on my experience in bringing the latest release up to snuff as compared to the previous one, driver clean-up is still a little too intense unaided. Of course, now that I've done it once, I can use Sysprep to build a reference installation and then add those hard-found drivers so that I don't have to go looking again. I wish somebody would tell me how to do that across minor releases for Windows, or clue me into some automagic tool I have yet to discover on my own.
The current release definitely boots up, resumes from sleep or hibernation, and shuts down more quickly on the same hardware than it did for the previous release. It also seems more responsive to “edge gestures” on the touch screen — a big relief to me — than the previous release felt as well. And in general, the touch interface seems easier to use (though I’m not sure if that’s because I’m more familiar with the Windows 8 UI by now, or if it’s a real change to my systems’ touch behavior).
I restored my favorite desktop gadgets, too, only to learn that one of my personal favorites — it's called Vista Shutdown Control 2 — is munged on C|NET, where it shows up as only 227 bytes (it's 425 KB in actuality). The best working link remains on the old Vista-era Microsoft Desktop gadgets page, which had been taken off-line some time back but is now up and available again. There was a period late last year when MS said "No more gadgets!" and took the page down, but they've definitely reversed course on this for Windows 8, so gadgets are back in full force, and I'm very glad about it: they remain pretty handy desktop status or quick-access indicators, Metro or no Metro.
Next, I'll start playing with the much-touted improvements to the Metro apps. Looks like I'm going to have to sign up for Exchange-based e-mail (I've been wavering on this for some time for lots of other reasons, but I can't see any way to fully explore what Win8 can do with mail without this capability). I also had to hit the Escape key to get the bottom-of-screen menu to show up in SkyDrive (so I was a little nonplussed until I recalled that essential detail).
I keep running into some interesting desktop behaviors that surprise me from time to time, too. For example: I can't just right-click an Explorer item to call up its associated menu. I must first left-click to select the object, then right-click to provoke the options menu for that object. Thanks to a new touchpad driver, however, my issues with accidental UI element tear-offs have stopped (that was a true PITA).
I’ll keep digging, and keep reporting back. Again, my current opinion remains: so far, so good.