[kml_flashembed movie="http://www.youtube.com/v/RJISYEbPF4E" width="425" height="350" wmode="transparent" /]
[kml_flashembed movie="http://www.youtube.com/v/szLmAPW39uE" width="425" height="350" wmode="transparent" /]
[kml_flashembed movie="http://www.youtube.com/v/rDwWXpju77Q" width="425" height="350" wmode="transparent" /]
In case you weren’t one of the readers stampeding these stories, here are the most popular tales from across TechTarget’s five Australian sites in 2008:
1. SearchCIO readers could not get enough of this story comparing virtualisation wares from Microsoft and VMware.
2. Career advice clearly appeals to networking professionals, who flocked to this piece about how certifications can improve your prospects at work.
3. Old-school is still big for security pros, who loved this piece about five command line tools to detect Windows hacks.
4. The blend of open source NAS and virtualisation proved the most popular mix for our storage-oriented readers.
5. Bluetooth for Business was the story of most interest to SearchVoIP ANZ readers in 2008.
|By 2020, the virtual world will have blended with the physical world; to speak of them as separate spheres will seem anachronistic.
Nicholas Carr, as quoted in Pew: 55% of Experts Herald Virtual Worlds and Augmented Reality in 2020
Here’s a link to the newest Pew Internet Life report.
|Companies often use lifecycle management as a last ditch effort, a “Hail Mary” pass…In reality, product lifecycle management is a difficult task. The root of its difficulty lies in its proactive nature.|
Companies naturally take reactive stances to their products as they move through their lifecycles, responding to ebbs and flows in profits, reacting to competition, worrying about the inevitable patent expiration. Product lifecycle management must be a proactive initiative. It must be a key organizational aspect of the company and not a burden, a positive force for profit and for the future of the business. And that’s a difficult transition.
|A Baltimore federal court judge ordered six absent defendants yesterday – including one from Maryland – to shut down Internet businesses that the Federal Trade Commission claims are part of a vast $100 million “scareware” scheme that tricked more than a million people into purchasing useless security software by making them think their computers were under attack.
Tricia Bishop, Court orders ‘scareware’ shut down
The companies allegedly represented themselves falsely as Internet marketers and used legitimate advertising outlets to place malicious advertisements that redirected consumers to the defendants’ Web sites.
There, screens would pop up saying a security scan had revealed harmful or illegal files and urging computer users to purchase software for $40 to fix the phony problems. In that way, the companies were able to bilk people of more than $100 million, according to the FTC.
Bit9’s come out with their list of 2008’s Popular Applications with Critical Vulnerabilities. Top of the list? Mozilla Firefox. Is IE even on the list? Nope.
This caught my attention because I’m just about to FedEx back my old hard drive so it can be wiped clean and reformatted. You see, I was hit by the Backdoor.Tidserv Trojan and it shut me down in about ten minutes.
As soon as my Symantec warning came up, I shut down and rebooted in safe mode. The tricky little Trojan told me (using its Symantec voice) that I had successfully cleaned up, but it was lying. When I started back up, I was in pop-up city. Another shutdown and scan told me that a Trojan had been found and quarantined. Again, it was lying. What the Trojan was actually doing was shutting me down a little bit more each time I ran a virus scan. After about five scans, I was toast.
My 2009 resolution? Remove Firefox.
|Lewis Carroll would’ve had a field day satirizing the re-emergence of WAFS (Wide Area File Services), a storage industry acronym with as many meanings as there are vendors offering products. Chase this particular white rabbit down its hole, and Alice the IT manager could embark on a journey at least as bizarre as her namesake’s trip to Wonderland.|
I’ve been reading about WAN optimization this morning and am shaking my head at how vendors seem to be alienating would-be customers by drowning them in proprietary lingo. Dennis Drogseth (Network World) did a great job breaking down the issues so I could understand them.
The issues have been that while WAN optimization products deliver real value on a link-by-link basis, they obscure end-to-end visibility so that strategic planners are left to guess at the real “before” and “after.” They are also typically housed as an appliance with, up until very recently, little interest in integration with the rest of the management community. Finally, as “cheese-stand-alone” solutions, they become costly when scaled to large enterprise environments with hundreds and sometimes thousands of remote locations.
Ok. I get it now. WAN optimization vendors are scrambling because nobody in their right mind today wants to buy into an expensive cheese-stands-alone solution.
|In a typical setup, a WAN accelerator is placed at each end of a WAN link. The appliance sits on the LAN side, on the clear-text side of a VPN device and behind the firewall or Internet router, and intercepts all traffic. The traffic is compressed, sent across the WAN, decompressed by the remote accelerator and then forwarded to the destination.
Mike DeMaria, Breaking the WAN Bottleneck
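The compress-send-decompress flow DeMaria describes can be sketched in a few lines. This is a minimal illustration using ordinary zlib compression; real WAN accelerators use proprietary deduplication and protocol-specific optimizations, so treat the function names and the compression scheme here as illustrative assumptions only.

```python
import zlib

def accelerate_send(payload: bytes) -> bytes:
    """LAN-side appliance: compress clear-text traffic before it crosses the WAN."""
    return zlib.compress(payload, 9)

def accelerate_receive(wan_data: bytes) -> bytes:
    """Remote appliance: decompress and forward to the destination."""
    return zlib.decompress(wan_data)

# Simulated branch-office transfer; repetitive traffic compresses well.
original = b"GET /quarterly-report.xls HTTP/1.1\r\n" * 200
on_the_wire = accelerate_send(original)
delivered = accelerate_receive(on_the_wire)

assert delivered == original  # traffic arrives intact
print(f"LAN bytes: {len(original)}, WAN bytes: {len(on_the_wire)}")
```

Note that the bytes a router between the two appliances would count (`on_the_wire`) are far fewer than the bytes either LAN sees, which is exactly the visibility problem discussed below.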
Peter Sevcik also does a really good job explaining how WAN accelerators work and teaches us the difference between “transparent addressing” (the term used by Cisco) and “correct addressing” (the term used by Riverbed).
There are two approaches used as seen by the routers between the accelerators. Transparent addressing shows the original client-server source and destination addresses and hides the addresses of the accelerators. Correct addressing shows the addresses of the accelerators and hides the addresses of client-server. Both approaches work. Both approaches have their pros and cons…
In a transparent addressing architecture, the monitoring tools will continue to show network usage by the original client-server addresses and port numbers. But you will not directly see the total traffic carried between the accelerators. Remember that they are masked.
With a correct addressing architecture, traffic monitoring tools will show traffic as having come from/to the acceleration appliances. In this case the client-server traffic volume is masked. However, in both cases the traffic volumes that are reported by routers or probes between the accelerators will be dramatically changed from the original true traffic as seen on the LANs.
In both cases the best solution is to move the network monitoring probes to the LAN side of the appliance or to gather usage information directly from the appliance itself. An ongoing, real-time before-and-after picture of what the accelerators are doing (like how much compression is being achieved) can only be supplied by the accelerators. So we recommend shifting traffic monitoring to the appliance. That way you get the same accurate data from either approach.