In a fascinating phone call with Samit Patel of App-DNA.com last Monday (12/13/2010) I learned more about that company’s Windows application analysis technology, AppTitude, specifically in light of ongoing and planned enterprise-class migrations from older Windows desktop versions to Windows 7. Along the way, I also learned that the same technology App-DNA brings to operations wishing to streamline and manage the migration process will also work for other likely migrations, including virtualization efforts aimed at Hyper-V, VMware, Citrix XenApp, XenDesktop, and so forth (in fact, AppTitude won “Best of Show” at Citrix Synergy 2010).
Let me be very clear as to why I describe AppTitude as an analysis tool rather than a migration tool, even though it can be an important or even invaluable part of the migration process. AppTitude does not actually do migration; rather, it analyzes elements from what Patel calls an “enterprise’s definitive software library” (the collection of installable images, programs, and executable files it uses to generate desktop environments for end users) to determine where potential software conflicts or problems might exist, and then to implement automated remediation strategies or techniques to address them.
Most migration tools work by observing the runtime behavior of specific instances of desktop environments, but AppTitude constructs a static virtual model for each of the elements it finds in an organization’s software library. This permits the tool to leverage information from packages or items that would normally be handed off to IT professionals for packaging and/or deployment, and to decompose underlying software API calls and object references to find dependencies within the code that may or may not be compatible with a Windows 7 runtime environment. Says Patel: “The real value comes from going beyond MSI tables for portable executables to analyze headers and other data, and to access portable executable (PE) data to see if it will run or not.” Because this technique requires no actual installation, no observation of runtime behavior is needed. In fact, the 60,000 or so data points that AppTitude collects for application library entries suffice to answer these all-important questions:
- Will it install?
- Will it run properly (or at all)?
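App-DNA’s actual analysis is proprietary, but the general flavor of static PE inspection is easy to demonstrate. Here’s a minimal Python sketch (not App-DNA code; field offsets per the Microsoft PE/COFF specification) that pulls basic facts out of an executable image without installing or running it:

```python
import struct

# COFF machine types, per the Microsoft PE/COFF specification
MACHINE_TYPES = {0x014C: "x86", 0x8664: "x64", 0x0200: "Itanium"}

def inspect_pe(data: bytes) -> dict:
    """Statically inspect a PE image: no installation or execution needed."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ/PE executable")
    # The 4-byte e_lfanew field at offset 0x3C points to the PE signature
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("missing PE signature")
    # COFF file header follows the signature: machine type, section count
    machine, nsections = struct.unpack_from("<HH", data, pe_offset + 4)
    return {"machine": MACHINE_TYPES.get(machine, hex(machine)),
            "sections": nsections}
```

A real analysis engine would go on to walk the import tables and cross-reference every API call against an OS compatibility database; this sketch only shows that the raw material is sitting in the file, waiting to be read.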
The biggest problem with tools and approaches that rely on trial and error or runtime observation is that they cannot possibly cover all potential cases or usage scenarios. Patel offered a telling example: an application subject to batch script invocation every other Sunday wouldn’t manifest problems in a runtime environment unless testing happened to occur on that particular day, or ran long enough to come up against that schedule (neither of which is terribly likely, he opined). Only a tool like AppTitude, which systematically examines every possible cross-reference, all internal code structures and external references, and any scripts or other automated processes that might invoke the application, can catch the potential gotchas that could emerge.
The approximately 60,000 data points that AppTitude gathers about applications represent the so-called “application DNA” that gives this company its name, and its tool so much punch. Though AppTitude doesn’t actually DO migration per se, it can help (and has helped) large organizations anticipate and remediate potential issues when conducting the migration process. Patel indicated that use of App-DNA resulted in price reductions of up to two-thirds in competitive bids for extremely large migrations, as compared to more labor-intensive and time-consuming runtime approaches. Even better, those winning bids also included timelines less than one-third as long as those in the losing, runtime-based bids.
App-DNA prices its offerings on a per-application basis, but enterprise licenses based on the total number of endpoints (as is the case with Citrix Xen and other desktop management environments) are also available. With Windows 7 spearheading the need for future migrations, Windows 8 in the offing for 2014, and virtualization or mobile interface migrations also in the mix in many enterprises, AppTitude appears to offer the opportunity to pay for itself many times over for organizations with thousands of desktops and hundreds to thousands of applications to manage (and migrate).
Holy Moly! I just took a quick look at the Microsoft Security Bulletin Advance Notification for December 2010 (this is a temporary placeholder for the actual security bulletin, which will be released at the same time Microsoft posts its updates to the Windows Update Service, so the final bulletin may be in place by the time you read this). There are 17 security updates in the queue for this month, the highest number I’ve ever seen. In fact, according to Mike Reavey of the Microsoft Security Response Center (MSRC), this is the highest number of updates ever released on a Patch Tuesday. See his MSRC blog “December 2010 Advance Notification Service is released” (12/9/2010) for some interesting information about total bulletin counts, vulnerabilities covered, and information security trends.
Among the most interesting tidbits from this blog is the declaration that Microsoft “…will be closing the last Stuxnet-related issues this month. This is a local Elevation of Privilege vulnerability and we’ve seen no evidence of its use in active exploits aside from the Stuxnet malware.” Likewise, an older Remote Code Execution vulnerability in Internet Explorer (reported in November 2010 in MS Security Advisory 2458511) that affects versions 6, 7, and 8 will also be addressed in the December security updates. Finally, Reavey also points to an interesting article from Microsoft Security Research & Defense entitled “On the effectiveness of DEP and ASLR” (DEP is Data Execution Prevention, and ASLR is Address Space Layout Randomization, two techniques Microsoft uses to good effect to limit the impact of exploit attempts, especially those that seek to leverage buffer overflow weaknesses).
It will be interesting to read more details about this month’s security updates when Microsoft posts them to its update servers at about 11 AM Pacific time today (UTC-08:00). I’ll post further on what’s in the mix in a follow-up blog tomorrow.
When Alice heads off to Wonderland, she does so by falling down a rabbit hole, and experiencing a sequence of dizzying and terrifying transitions from top to bottom. I had a similar experience this morning, as I tried to find a newer Wi-Fi driver for the Intel Centrino Advanced-N 6200 AGN interface device in my HP dv6t-2300 notebook PC. My first stop, of course, was at the HP support pages where, to my ultimate consternation and confusion, the Intel device was not listed among the options available for this notebook make and model. My immediate reaction: “WTF?!” was followed by extended poking around to see if I could find through other means what I couldn’t get by politely knocking on the front door. No dice.
My next move was to engage HP support in an online chat session to see if they knew something I didn’t. After going round and round with a very nice support tech, I did learn that my system was part of a special build put together for a promotion at the beginning of 2010 (I bought my system in February 2010 for just over $1100, in a configuration that HP no longer makes today, where neither the i7 Q720M processor nor the Intel 6200 AGN interface is available on current models). In fact, this particular tech couldn’t find any drivers for the wireless interface that was installed on my machine at all.
So I transitioned to a phone call, to a location coded PH (which Bing cheerfully informs me is in the Philippines), and got down and dirty with another support tech instead. Interestingly, the HP support tech on the phone who picked up from his online counterpart couldn’t use the service ticket number the chat tech gave me to identify our session, but he was able to find the ticket using my phone number. Go figure…
After double-checking my product id and serial number six ways from Sunday, he told me my machine definitely included the aforementioned Intel Wi-Fi interface but that the general information for my make and model didn’t. He did some more poking around, and eventually found a source for the drivers that I hadn’t considered during my own search: the “reload drivers” function inside the HP Advisor software that HP installs on all of its PCs. Alas, the driver it turned up was older than the version I was already running. Thanks to my handy-dandy DriverAgent.com subscription, I’m also aware that Sony has published a newer version still, even though HP is apparently lagging behind (and I can neither install the Sony driver on my HP as you might expect, nor can I extract sufficient “driver goodies” from the Sony installer file using Universal Extractor to get the driver files that way, either).
The ultimate outcome of the adventure was to realize that HP doesn’t necessarily have access to information that’s any more current or correct than what’s available on the Web (even though they are the gatekeeper to this particular driver, as the Sony version won’t install on my HP machine, and the Intel Driver Update Utility says that I must contact the system vendor to get drivers for the 6200 AGN device that provides me with wireless network access).
As I’m watching my wireless bandwidth on the HP notebook on the one hand, and on my el-cheapo Asus EeePC 1000HE on the other, both connected to the same 802.11n WAP in my bedroom closet, I’m also struck that bandwidth on the HP box is bouncing from a low of 13 Mbps to a high of 130 Mbps, while the Asus stays solidly pegged at 150 Mbps when both units are literally sitting right next to each other. I was really hoping a new driver might help take care of this and put the more expensive machine on a par with the cheaper one, but apparently the Atheros Wi-Fi interface works a lot better than the Intel one. Maybe I’m going to have to look into antennas, machine placement, and yada yada yada. Sheesh!
For those of you who don’t already know, I’ve got a bit of an HTPC Jones (that’s “Home Theater PC” BTW). I’ve written a book with audio-video-PC guru and SilentPCReview.com site operator Mike Chin (Build the Ultimate Home Theater PC) and another book about the stellar, Linux-based media PC environment named MythTV with FAQ guru Jarod Wilson (Hacking MythTV). So when I found this year’s Ars HTPC Guide: December 2010 I read it over with both anticipation and delight. Not only is Ars Technica a great source for breaking Windows news and rumors, it also offers solid, informative technical content (along with partner arm Orbiting HQ).
For those of you who might be interested in putting together a state-of-the-art HTPC system for the end of 2010, you’d be hard-pressed to find as useful, cogent, and up-to-date a set of specific hardware and software picks and recommendations as you’ll find in this guide. It offers a slate of hardware options for CPU, motherboard, memory, the all-important digital TV tuner card, storage, Blu-ray, remote, and so forth, that should let even relative DIY novices acquire and assemble a kick-butt HTPC system on a relatively modest budget (around $1150) and without killing themselves in the process.
While there’s a recommended selection in each category, there are also enough options discussed so that those who may not like (or be able to afford) the first choice in each area will have plenty of other items to ponder. The discussion of the TV Tuner card (an element that can easily make or break your HTPC experience) is right on the money (but expensive: it’s a CableCARD unit that retails for $400). Case, PSU, and remote control discussions are equally helpful.
If your thoughts, plans, or wishes for a home theater system include a PC in the mix, check out this guide. It’s definitely worth a read, or even a spot in your favorites or bookmarks.
I regularly read Paul Thurrott’s Supersite for Windows, so when I saw a story there entitled “HP Drops Windows Home Server Product Line” I thought to myself “Bummer!” Here’s a capsule summary of what Thurrott says in this story (it’s a paraphrase, not a direct quote, so I set it in italics here): In the wake of Microsoft’s recent announcement that it would drop its Drive Extender technologies (these provide automatic data redundancy across a pair of hard drives, and create an extensible storage pool with a single drive letter that can be expanded by adding drives to the system), HP has indicated it plans to discontinue its MediaSmart Server (MSS) products. And indeed, I am quite sad to see this product leave the marketplace. It offers SOHO users a rare combination of great features, reliable storage, and a pretty bullet-proof runtime environment at an affordable price. Even aside from its media management capabilities — which are pretty good, and kept getting better — the MSS boxes do a peachy job with automated backup and low-maintenance network file storage, with a usable publicly-accessible Web interface from the Internet included at no extra charge. Good stuff!
But when I went looking for confirmation of this planned change, I didn’t have far to look. CNET also has a December 1 story entitled “HP discontinues MediaSmart Server line.” This is further confirmed on the Microsoft Home Server blog for November 30, which simply states that “…HP has told us they do not plan to provide a platform for Windows Home Server code named ‘Vail.’ HP has told us they will sell the existing version of MediaSmart Server through the end of calendar year 2010…” Thus, the end is no longer too far off, either.
I’m sorry to see this product go, as I spent many enjoyable hours digging into these systems and tweaking their hardware and software. It was a cute little box, too. Maybe I should try installing Windows Server 2008 on my box and see how it does in that capacity. Too bad: another one bites the dust!
Nir Sofer of Nirsoft has written lots of great utilities, several of which I use pretty regularly. Recently while looking for information to compare UFD speeds (UFD stands for USB Flash Drive, for those not already hip to this abbreviation) I was guided to a page that Sofer set up to report on the results of a somewhat recent addition to the excellent USBDeview program that you can download for free from his site. If you go looking for it yourself on the linked page, be patient: you need to scroll all the way down to the “Publishing Your Speed Test Result” heading to get to the link at http://usbspeed.nirsoft.net. Here’s a UI view of the program from my desktop:
The cool thing about this utility is that it offers all kinds of snazzy, user-callable command line capabilities as well as the basic GUI you see here. Among them is a basic UFD speed test that writes and then reads a large (1 MB) file to provide a rough’n’ready metric for the device’s write and read speeds. Sofer has also posted results for hundreds of such drives on his site, so you can use this info to compare devices against one another. Actual speeds will vary with the USB interface, chipset, and controller on the host side, but as long as those elements remain the same, the relative rankings in Sofer’s list should hold up even where absolute performance differs.
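The test itself is conceptually simple: time a sequential write of a large file to the drive, then time reading it back. Here’s a minimal Python sketch of the idea (the path and 1 MB default are my placeholders, not USBDeview’s internals):

```python
import os
import time

def drive_speed_test(path: str, size_mb: int = 1) -> dict:
    """Rough sequential write/read benchmark against a file on the target drive."""
    payload = os.urandom(size_mb * 1024 * 1024)

    t0 = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())          # push the data out to the device
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        readback = f.read()
    read_s = time.perf_counter() - t0

    os.remove(path)
    assert readback == payload        # sanity check: data survived the round trip
    return {"write_MBps": size_mb / write_s, "read_MBps": size_mb / read_s}
```

One caveat with a naive sketch like this: the read pass will often be served from the OS page cache rather than the device, so real benchmarking tools take pains to bypass caching; treat the read figure here with suspicion.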
Check it out: it’s pretty neat!
Everybody’s heard about the Stuxnet virus by now, built specifically to attack Siemens’ SCADA systems through one of its most popular programmable logic controllers (PLCs). At the most recent Virus Bulletin conference in Vancouver, BC, in late September 2010, researchers from Symantec reported their findings about this fascinating and complex threat. These findings included their determination that Stuxnet includes “…the world’s first-ever toolkit designed for…” PLCs (SC Magazine, October 8, 2010) and that the complexity of the malware involved “…would have been written using 5-10 core developers over six months and tested on systems mirroring the process control hardware” according to statements attributed to Symantec researcher Liam O Murchu at that conference (ibid). In fact, for the attack to work, the Stuxnet developers “…would have needed to steal digital certificates used to sign driver files used in target systems” (ibid).
Clearly, this is not the work of a single alienated cracker with too much time on his or her hands (O Murchu puts his assessment in pithier language: “This is not a teenage hacker coding in his bedroom-type operation”). Because the attack apparently affected much of Iran’s nuclear development infrastructure, in fact, many people inside and outside that country see government funding (if not an outright government-led “black op”) behind the Stuxnet virus. Israel and the US lead the list of likely culprits, though proving such involvement is also nearly impossible.
But where things get interesting is in the byplay that follows disclosure of such technical analysis and information. The n3td3v IT Security Consultancy in the UK, which is the brainchild of a well-known and eccentric self-professed security “expert” named Andrew Wallace, posted this response to the aforecited SC Magazine article:
“Motivation behind Stuxnet.” BP lobbied for the release of the Lockerbie bomber, and the people responsible for Stuxnet wanted to make sure they paid. To make sure the oil deal from releasing the bomber, BP couldn’t make a profit from. Stuxnet targeted the oil well. There were a lot of unhappy people after the release of Abdelbaset Ali al-Megrahi. Abdelbaset Ali al-Megrahi was convicted for blowing up Pan Am Flight 103 over Lockerbie, Scotland, on December, 21, 1988. He was freed on compassionate grounds by the Scottish government on August, 20, 2009. The claim was he had terminal prostate cancer and was expected to have less than three months to live. It was a lie and he is still alive living the life of riley in Libya.
Originally posted by me at http://www.schneier.com/blog/archives/2010/10/stuxnet.html#c467887
[Note: other postings on the Schneier blog are more coherent and intelligible, and have lots of interesting things to say about the affected Siemens PLCs.]
In fact, n3td3v is pretty well-known in the security community because his identity serves as the focus of a 2006 BlackHat study entitled Who is “n3td3v”? Andrew Wallace has even had his psychological profile “done” on the full-disclosure list, on which he made something of a pest of himself in that time frame. But as interesting technical events unfold on the information security stage, there’s apparently always a temptation to exploit the notoriety and publicity that surround spectacularly successful (or mysterious) exploits like this one. Who’s to say this kind of epiphenomenon doesn’t make the whole situation even more compelling than it already is?
Great article posted in today’s Computer Business Review (11/29/2010). It’s an interview with the CEO of App-DNA entitled “‘Migration means more automation’: Q&A with Mike Welling…” While I’d recommend reading through the whole article to catch all the details — and there are several important items many readers will want to learn more about — here’s my capsule summary of what this fascinating story contains:
- The story begins with a nod to a 2009 Gartner study that estimated the costs of migrating from Windows 2000 or XP to Vista or 7 at “three to four times the cost of upgrading from Windows Vista to Windows 7 because of application remediation and replacement cost.” Numbers cited range from $1,035 to $1,930 per user for the big jump versus $339 to $510 per user for the smaller one.
- App-DNA’s product, AppTitude, helps to automate compatibility testing for the thousands of applications in use in a typical enterprise that might be contemplating a major OS upgrade, platform migration, or virtualization effort. Big names who’ve used this technology to good effect include BAE Systems, British Telecom (BT), Exxon Mobil, and Barclays.
- Numerous big customers (names withheld) have seen migration costs fall by 50 to 75% of original estimates when using AppTitude to focus and guide their efforts. Other outfits cite ongoing savings of $3M per year thanks to AppTitude.
- The “DNA” terminology comes from detailed analysis of common software components in applications, to build a database that captures somewhere around 80,000 data points around individual applications. This permits incredibly detailed profiling, and equally accurate assessments of potential compatibility issues.
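To put the Gartner per-user figures in fleet-scale perspective, here’s the arithmetic for a hypothetical 5,000-seat enterprise (my calculation, not Gartner’s):

```python
def migration_cost(users: int, low_per_user: float, high_per_user: float) -> tuple:
    """Total migration cost range for a fleet, from per-user estimates."""
    return users * low_per_user, users * high_per_user

# Gartner's cited 2009 per-user estimates, applied to 5,000 seats
big_jump   = migration_cost(5000, 1035, 1930)  # Windows 2000/XP -> Vista/7
small_jump = migration_cost(5000, 339, 510)    # Windows Vista -> 7

print(big_jump)    # roughly $5.2M to $9.7M
print(small_jump)  # roughly $1.7M to $2.6M
```

Even the cheaper jump runs well into seven figures at that scale, which is why tooling that trims application remediation labor can pay for itself so quickly.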
As I said in the lead-in ‘graph, see the original story for more details and info, or visit the App-DNA Resources page for Windows 7 application migration checklists, workbooks, case studies, plus eBooks and white papers.
I’ve long been a fan of Secunia’s vulnerability scanning and patch-alert tools: the Personal Software Inspector (PSI), free for individual, at-home use, and the Corporate Software Inspector (CSI), its for-a-fee counterpart for workplace use. A beta version of the next generation of PSI has been out for at least a couple of months now, but I only recently got around to installing and working with this tool, and I very much liked what I saw (warning: on one of my 64-bit test machines, I had to explicitly use the right-click “Run as administrator” option to get the program to install properly; be prepared should this happen to you, or should you encounter difficulties the second time you run the program).
Here’s a snap-by-snap recitation of the install and first run processes for this nice piece of software, available for download as the PSI 2.0 BETA:
In terms of overall functionality — except for the program’s new auto-update facility, which allows it to handle downloading and installing updates without requiring user interaction — there isn’t much else new about the 2.0 beta version of PSI. What is new, however, is a complete reworking of the user interface that is much cleaner and easier to follow and that does away with the former version’s Simple and Advanced UI modes, probably because the redesign makes that distinction moot. Check out the program and see what you think: I’m looking forward to the commercial release myself!
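Secunia doesn’t publish PSI’s internals, but the core pattern behind any such scanner is straightforward: inventory installed software, then compare each program’s version against the earliest patched version named in an advisory feed. A minimal Python sketch (the program names and version numbers below are invented for illustration):

```python
def parse_version(v: str) -> tuple:
    """'2.0.0.3001' -> (2, 0, 0, 3001), so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def scan(installed: dict, advisories: dict) -> list:
    """Return (name, installed, patched) for programs older than the patched version."""
    insecure = []
    for name, version in installed.items():
        patched = advisories.get(name)
        if patched and parse_version(version) < parse_version(patched):
            insecure.append((name, version, patched))
    return insecure

# Hypothetical inventory and advisory feed
installed  = {"ExampleReader": "9.4.0", "ExampleRuntime": "6.22.1"}
advisories = {"ExampleReader": "9.4.1"}
```

What PSI 2.0 adds on top of this detect step is the remediation step: fetching and installing the patched version automatically, with no user interaction.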
If you work with solid state disks, you’re probably already familiar with the various tools that your drive vendors provide for their units. Mostly, these are tools for checking and upgrading firmware, but occasionally, you’ll also come across a great tool like the Intel SSD Toolbox as well (note: a new version of this tool — v.2.0.1.000 — was released on October 19, 2010, so if you haven’t grabbed it yet follow the link and do that right now).
But there is at least one vendor-neutral tool that’s also worth adding to your system admin/troubleshooting toolbox if you work with SSDs: Crystal Dew World’s CrystalDiskInfo utility (how the Japanese come up with these weird and wonderful Website names continues to amaze and delight me). It can help with several key items of information:
- Firmware revision: This tells you the version number for the SSD firmware installed on the drive you’re inspecting. This can be a key element in obtaining the best possible performance from an SSD, and is information worth knowing.
- Supported Features: This tells you what advanced features are turned on for the drive you’re inspecting. The TRIM feature is probably the most important item to look for. TRIM lets the OS flag blocks of data as no longer in use, so the drive can defer garbage collection to a convenient time, manage its free space internally, and keep blank pages on hand to satisfy pending write requests. SSDs cannot overwrite occupied pages in place; the flash must be erased before it can be rewritten, which slows writes down. And because SSDs write at the page level but erase only at the (much larger) block level, writes require special handling, especially in tandem with the wear-leveling algorithms SSDs use to spread “wear” evenly across the entire device.
- Other features you’re likely to see turned on for PC SSDs include: SMART (Self-Monitoring, Analysis, and Reporting Technology, a monitoring system common on most hard disks and modern storage devices, including SSDs), 48-bit LBA (48-bit logical block addressing, a linear addressing scheme introduced with ATA-6 in 2003), and NCQ (Native Command Queuing, a technology for improving SATA disk performance by letting the drive firmware optimize the order in which it satisfies read requests).
- Other features you won’t find on SSDs, but will find on conventional hard disks, are APM (Advanced Power Management, used to reduce power consumption on spinning drives when they’re idle, but unnecessary on SSDs) and AAM (Automatic Acoustic Management, used to keep the noise that spinning drives emit to a minimum, also unnecessary on SSDs, which have no moving parts). You also won’t see temperature reported for SSDs, though such information is customary on SMART hard disks.
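The page-versus-block asymmetry that makes TRIM matter is easier to see in a toy model: writes land only on blank pages, erases happen a whole block at a time, and TRIM is how the OS tells the drive that a page’s old contents can be reclaimed. A deliberately simplified Python sketch (nothing like a real flash translation layer’s logic):

```python
class ToyFlashBlock:
    """A single flash block: pages write individually, but erase all at once."""

    def __init__(self, pages_per_block: int = 4):
        self.pages = [None] * pages_per_block  # None == blank, ready to write
        self.stale = set()                     # pages TRIM has marked reclaimable

    def write(self, page: int, data: str) -> bool:
        """Writes succeed only on blank pages: no in-place overwrite on flash."""
        if self.pages[page] is not None:
            return False
        self.pages[page] = data
        return True

    def trim(self, page: int) -> None:
        """Mark a page's contents stale; the actual erase can be deferred."""
        self.stale.add(page)

    def erase(self) -> None:
        """Erase works only at block granularity: every page goes blank at once."""
        self.pages = [None] * len(self.pages)
        self.stale.clear()
```

Without TRIM, the drive only learns a page is dead when the OS tries to rewrite it, so erase work piles up in the write path; with TRIM, the drive can do that housekeeping in the background before new writes arrive.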
CrystalDiskInfo shows all of these things, and more, as you can see here:
A bit more data is presented for conventional (spinning) hard disks, like this Samsung 1TB SpinPoint drive, including temperature information and lots of sector-handling stats:
Best of all, this tool is freeware, and thus can’t strain your tools budget even one little bit. Check it out: you’re bound to like it. The same site also offers other free tools as well, and will reward the download and playtime required to learn them.