Now here’s an interesting concept I came across on Gizmag (a very cool site itself). It’s a “zero client” system by Pano Logic with “no processor, no operating system, no memory, no drivers, no software and no moving parts.” I don’t know about zero client since there’s still a 2″ x 3.5″ x 3.5″ box sitting on your desk, connected to your monitor, keyboard, network, and audio, but it’s a neat concept for clearing desktop clutter nonetheless. Like any other “thin” client, the Pano device also requires a server-side component to connect to for processing, storage, etc.
This reminds me of the NC. For those of you who were in the industry in the mid-90s, you’ll remember the network computer (NC) revolution. It was going to change the world, how we used computers, how we operated our businesses…and then it fizzled out like many other neat IT concepts. Now that virtualization has a strong grip on the industry, I suspect something like the Pano device could take hold as well. Maybe an acquisition target for Microsoft or Google? I’m a bit skeptical but I love this kind of innovation – it’s what makes the world go ’round…and keeps us employed!
After years of Microsoft playing the Goliath to Apple’s David, the tables have officially turned: Apple’s market value hit $222.1 billion, besting Microsoft’s $219.2 billion and making Apple the most valuable technology company.
And while two imaginary numbers being punted around by investors doesn’t impact IT directly, the changing of the guard has made a significant impact on workplace technology and how it’s managed.
But first, what hasn’t changed:
While VMworld’s full content catalog won’t be released until June, the preliminary topics are out now and have attendees buzzing about where the conference, scheduled for August in San Francisco, is headed.
As InfoWorld.com reported:
Is VMworld still the premiere virtualization trade show it once was? Or is it now becoming a cloud event? … As hypervisor becomes (or many argue already has become) a commodity, VMware has to take it to another place or another layer: the cloud.
Perhaps organizers were influenced by the recent announcement that VMware is now one half of an unlikely couple with Google. InfoWorld’s David Marshall goes on to state that VMware’s acquisition of SpringSource was a factor in last year’s VMworld, despite its less-than-relevant status among conference-goers. Will Google and VMware’s partnership to take over the cloud computing market dominate VMworld 2010’s conversation? Four of the eight sessions suggest yes:
- Hybrid and Public Cloud
- Private Cloud—Management
- Private Cloud—Business Continuity
- Private Cloud—Security
Virtualization pros needn’t panic, however, since individual sessions are still virtualization-centric or, as Marshall describes it, virtualization wrapped up in a “nice cloud package.”
Cloud computing and virtualization are the buzzwords of the year, so it comes as no surprise that one complements the other in a setting like VMworld. Don’t let the presence of competing vendors fool you: VMware is the core of VMworld, and with VMware’s exploration of the cloud comes its implementation of the cloud using its honed virtualization tool set. In its seventh year, it makes sense that VMworld is evolving.
We at the IT Watch Blog appreciate irony as much as the next guy, but IBM’s recent faux pas might be pushing it. The AusCERT conference boasts that it is the “premier IT security event for IT security professionals and anyone with an interest in IT security.” Maybe so, but that doesn’t mean conference-goers are immune to malware-infected USB drives, especially not when IBM’s handing them out. Soon after the conference, IBM sent an email informing attendees that every USB drive it distributed may have been infected. Almost a week later, Sophos confirmed that the drives contained two worms: W32/LibHack-A and W32/Agent-FWF. The malware infects Windows systems via AutoRun and AutoPlay as soon as the device is plugged in.
If you’re not learning, you’re not living, and it seems malware a la USB is a lesson worth relearning. McAfee’s quarterly threat report [PDF] listed “generic removable-device malware” as number one in its Worldwide Top 5 Malware. Aside from USB drives (thumb drives to some) earning the superlative for Most Popular, AutoRun malware stood its ground, claiming two of the top five spots.
So what does this mean for your company? Sophos’ Graham Cluley notes that “more organizations are looking to control access to USB ports.” Whether you’re protecting against incoming attack or outgoing sensitive information, removable storage should always be used with caution.
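If you do want AutoRun out of the picture on managed Windows desktops, a single well-documented registry policy handles it. Here is a minimal sketch of that policy; in practice you would deploy it via Group Policy rather than hand-editing machines, and test it before any wide rollout:

```reg
Windows Registry Editor Version 5.00

; Disable AutoRun on all drive types (the 0xFF bitmask covers every drive class,
; including removable USB drives and network drives).
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

This stops autorun.inf-style worms from launching on insertion, though it does nothing about users double-clicking infected files themselves, so it complements rather than replaces endpoint anti-malware and port-control policies.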
SearchCIO.com‘s guide to desktop virtualization for CIOs, recently linked to here, reminded me of how many IT managers I see struggling with desktop management. I often see people in the trenches struggling with everything that desktop management can throw at you: malware infections, inconsistent configurations, missing patches, workstation backups … you name it. It made me wonder why more shops aren’t relying on desktop virtualization. Is it a lack of budget? A misunderstanding of the long-term administrative benefits? A lack of security buy-in? What’s the deal?
I know desktop virtualization solutions like XenDesktop and VMware View are not a magic fix for all our IT problems. However, managing desktops the old-fashioned way seems out of step with the times here in 2010. Not using available technology goes against what the highly paid business consultants with their MBAs tell management they need to do to cut costs and streamline their businesses. Why is this being overlooked in so many situations?
Maybe it’s out there and I’m just not seeing it. I’m curious to hear what you’re seeing and hearing – especially as it relates to businesses that aren’t using it. Is management not on board for some reason? Are they not listening at all? Is it just too expensive? Given the complexities of today’s environments, there’s got to be a better way.
Kevin Beaver is an independent information security consultant, keynote speaker, and expert witness with Principle Logic, LLC and a contributor to the IT Watch Blog.
A few years ago, a sighting of VMware and Google holding hands at a quaint bistro or being caught snogging on a tropical beach might have sent shockwaves through the IT paparazzi. Even Steve Herrod, VMware’s Chief Technology Officer, said the two tech giants were a bit of a mismatch:
When we first met last year, both sides seemed a little unsure… at the time we had fairly different product focuses, customer sets, and cultures.
Google was the bad boy of the consumer revolution, of life made simpler, the David that slayed Microsoft’s giant while breaking all the rules. VMware was the quieter revolutionary: the nerdy prodigy that efficiently organized away abstract concepts, and whose biggest fans were in the very IT sector Google was pushing toward irrelevance.
Call it fate, or Cupid’s arrow, or the fact that they shared the same college dorm at Stanford, the Gates Computer Science Building.
Whatever the cause, Steve Herrod was publicly crowing about the two tech giants’ “Open PaaS” strategy, and he offered a little more insight in his Q&A with Alex Barrett, news director of SearchCloudComputing:
Looking to beef up your virtualization knowledge, or just boot up some virtual desktops? We’re compiling the top resources you need to stay at the top of your game.
The Best Guides on Virtualization Strategies
Looking for a comprehensive primer on server, storage, network or desktop virtualization? You’ve come to the right place. Click one for help on your chosen path:
- Storage Virtualization Guide
- Server virtualization beginner’s guide
- A Basic Virtualized Network (Chapter Download)
- CIO Briefing for Desktop Virtualization Strategies
- The best free virtualization tools guide
- Search Server Virtualization: the site’s editors outline how industry changes and announcements affect how your company uses virtualization.
- Virtualization Pro: SearchVMware editors provide the grittier bits of the business side along with resources you need to check out.
- Irregular Expressions: User Dan O’Connor reveals vulnerabilities and exploits while sharing virtualization tips and tricks.
Looking for something a little more real-time? Then look no further: We’ve also compiled our favorite Virtualization Twitter accounts, including luminaries like Brian Madden and the lovable @VMWareCares.
Microsoft’s had a busy week so far, and it’s only Tuesday. A settlement was reached yesterday in the VirnetX patent suit, which began in February 2007.
The Facts: Infringed Patents
1. U.S. Patent No. 6,502,135: for “a method of transparently creating a VPN between client computer and target computer.” The jury granted VirnetX $71.75M.
2. U.S. Patent No. 7,188,180: for a “method for establishing a VPN using a secure domain name service.” The jury granted VirnetX $34M.
Microsoft has maintained that neither of these patents was infringed and, further, that the patents themselves are invalid. Soon after the ruling, VirnetX filed another suit against Microsoft, alleging that the same patents were infringed, this time by Windows 7 and Windows Server 2008 R2, both released after the initial suit was filed. (The original trial was aimed at Windows XP, Vista, and Microsoft’s Live Communication Server and Office Communication Server.) The settlement includes Microsoft’s licensing of the VirnetX technology, originally developed for the CIA, placing it in the company of major names that also use VirnetX, such as Google, HP, and AT&T.
VirnetX isn’t alone, either. Microsoft has been ordered to dish out a total of $490M for VirnetX and another patent lawsuit, this one dealing with Word and Excel’s XML technology, brought by Toronto-based i4i. This one seems a little more personal; i4i chairman Loudon Owen has called the win “a war cry for talented inventors whose patents are infringed.” Though Microsoft continues to appeal the decision, and continues to be rejected, future versions of Word and Excel will lack the custom XML capabilities. Versions of Word distributed prior to January 11, 2010 are not affected; all other versions will have their custom XML tags removed. Microsoft’s Gray Knowlton suggests using Content Controls to reimplement any of your solutions that use custom XML tags.
Is this just another case of patent trolls (or, more politely, non-practicing entities) filing patents for future lawsuits such as these? It would appear not, depending on your opinion of the PTO, as Microsoft’s repeated attempts to discredit the patents it’s accused of violating have been thwarted by the U.S. patent office itself. From CampusTechnology: “On Tuesday [May 11, 2010], i4i announced that the United States Patent and Trademark Office (PTO) had validated i4i’s patent on certain ‘custom XML’ technology. This XML-based technology creates a metacode map to manipulate a document’s structure without reference to content. It allows a document such as a Word file to be converted to XML without loss of content.”
Microsoft remains firm in its stance that the so-called custom XML technology is obvious and therefore fair game. Nevertheless, all versions of Word released after January 11, 2010 will strip documents of their custom XML upon opening (building on the patch for Word 2007 released earlier this year). Never fear: in April, i4i outlined how its x4w software will swoop in and save your custom XML tags. They’re hardly heroes, however: the fix adds an extra step, and extra software to purchase, for the end user.
Virtualization has many obvious benefits, but the one that stands out to me is using it for security vulnerability testing. One of the things that has frustrated me most over the years is how security testing tools will junk up your system, especially Windows. Install enough vulnerability scanners, network analyzers, and so on over time and you’ll undoubtedly be cussing like a sailor as slowdowns, instability, and blue screens of death creep into your work. Oh, and not to mention the ever-frustrating situation where your antivirus software “cleans” your vulnerability testing tools right off your system!
The neat thing with virtualization software such as VMware Workstation, VirtualBox, and the lesser-known Windows XP Mode in Windows 7 (which I recently wrote about for SearchEnterpriseDesktop.com) is that you can create a virtual security testing environment to muck up as much as you’d like, without worrying about affecting your day-to-day productivity by creating problems on your local system. When problems do arise in your virtual environment, you can simply fall back to an older image that works, or quickly create a new one.
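The fall-back-to-a-clean-image workflow is easy to script. As a sketch, here is a small Python wrapper around VirtualBox’s real `VBoxManage snapshot take/restore` subcommands; the VM name and snapshot label are hypothetical, and the wrapper defaults to a dry run that only prints the command so you can see what would execute:

```python
import subprocess

# Hypothetical VM name for illustration; the "snapshot take/restore"
# subcommands themselves are real VBoxManage CLI features.
VM_NAME = "SecLab"

def snapshot_cmd(vm, action, snap_name):
    """Build the VBoxManage argument list for taking or restoring a snapshot."""
    return ["VBoxManage", "snapshot", vm, action, snap_name]

def run(cmd, dry_run=True):
    """Print the command; actually execute it only when dry_run is False."""
    print(" ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)

# Take a clean baseline before installing messy security tools...
run(snapshot_cmd(VM_NAME, "take", "clean-baseline"))
# ...and roll back to it whenever the guest gets junked up.
run(snapshot_cmd(VM_NAME, "restore", "clean-baseline"))
```

With a scheduled wrapper like this, restoring a known-good testing environment becomes a one-liner instead of a rebuild.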
I know it seems like it’d be easier to have a dedicated computer to run your security tests from. However, doing everything on one system increases efficiency when you need to be mobile and share files between the host and virtualized systems. Plus, it gives you an excuse to invest in a high-end laptop that you may not be able to justify otherwise.
Kevin Beaver is an independent information security consultant, keynote speaker, and expert witness with Principle Logic, LLC and a contributor to the IT Watch Blog.
Today’s guest post is from Graeme Elliott, a Sydney-based Storage Architect for a large financial firm and leader of the Sydney Tivoli Storage Users Group. Elliott will be starting his own blog on IT Knowledge Exchange shortly, to be titled The Art of Storage.
Virtual storage is the “art” of moving the “smarts” (mirroring, snapshots, replication, etc.) out of a standard storage array’s controller and into an appliance that sits in the data path between the host and the storage array. Even though these virtual appliances are in the data path, there is generally only a minimal performance hit, and in some cases a performance boost, thanks to the appliance’s caching algorithms.
Storage from the backend storage array is now presented to the virtual appliance just as it would be to any other host. The virtual appliance can further carve up or merge this storage as desired into “virtual disks” that can be allocated to your hosts. From the hosts’ perspective, these virtual disks are just like any other disk.
The advantages of doing this are tremendous.
- All you need to worry about when acquiring storage arrays is capacity and performance (no smarts required)
- Span “virtual disks” across multiple arrays or RAID groups
- Only need the host drivers for the virtual appliance, not for each specific storage array vendor
- Storage from arrays can be placed in pools on the virtual appliance, allowing a tiered approach to allocations
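The pooling-and-carving behavior described above can be sketched in a few lines. This is a toy model, not any vendor’s API: backend LUNs from different arrays feed a tiered capacity pool, and virtual disks are carved from a tier without the host ever knowing which arrays sit behind them.

```python
class VirtualStorageAppliance:
    """Toy model of an in-path appliance: pool backend LUNs, carve virtual disks."""

    def __init__(self):
        self.pools = {}  # tier name -> free capacity (GB), aggregated across arrays

    def add_backend_lun(self, tier, array, size_gb):
        # Capacity from any backend array simply lands in the tier's pool,
        # which is what lets a virtual disk span arrays and RAID groups.
        self.pools[tier] = self.pools.get(tier, 0) + size_gb

    def carve_virtual_disk(self, tier, size_gb):
        free = self.pools.get(tier, 0)
        if size_gb > free:
            raise ValueError(f"tier {tier!r} has only {free} GB free")
        self.pools[tier] = free - size_gb
        # The host just sees an ordinary disk of the requested size.
        return {"tier": tier, "size_gb": size_gb}

appliance = VirtualStorageAppliance()
appliance.add_backend_lun("gold", "array-A", 500)
appliance.add_backend_lun("gold", "array-B", 500)  # second array, same tier
disk = appliance.carve_virtual_disk("gold", 800)   # spans both arrays
print(disk, appliance.pools)  # {'tier': 'gold', 'size_gb': 800} {'gold': 200}
```

The key design point the sketch illustrates is the indirection layer: because hosts only ever address the appliance’s virtual disks, the backend mapping can change (for instance during a data migration) without the host noticing.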
Another major benefit of these “virtual appliances,” and one that in my experience usually drives the initial purchase, is their data migration features. Being able to migrate data between storage arrays while the host stays online is highly beneficial in most organizations when it comes time to lifecycle a storage array.
In my “ideal” storage environment, I would have a virtual storage appliance between all hosts and their storage. This provides a consistent and fast method for storage administrators to provision storage while also providing a consistent presentation of this storage to the hosts no matter what backend storage vendor or storage array model is used.
There are always downsides to technologies like this, and virtual storage is no different. Troubleshooting performance is more complex: host data can now reside across multiple backend RAID groups and LUNs, be spread across multiple storage arrays, or share a backend LUN with other hosts.