Bit9’s come out with their list of 2008’s Popular Applications with Critical Vulnerabilities. Top of the list? Mozilla Firefox. Is IE even on the list? Nope.
This caught my attention because I’m just about to FedEx back my old hard drive so it can be wiped clean and reformatted. You see, I was hit by the Backdoor.Tidserv Trojan and it shut me down in about ten minutes.
As soon as my Symantec warning came up, I shut down and rebooted in safe mode. The tricky little Trojan told me (using its Symantec voice) that I had successfully cleaned up, but it was lying. When I started back up, I was in pop-up city. Another shut-down and scan told me that a Trojan had been found and quarantined. Again, it was lying. What the Trojan was actually doing was shutting me down a little bit more each time I ran a virus scan. After about five scans, I was toast.
My 2009 resolution? Remove Firefox.
|Lewis Carroll would’ve had a field day satirizing the re-emergence of WAFS (Wide Area File Services), a storage industry acronym with as many meanings as there are vendors offering products. Chase this particular white rabbit down its hole, and Alice the IT manager could embark on a journey at least as bizarre as her namesake’s trip to Wonderland.|
I’ve been reading about WAN optimization this morning and am shaking my head at how vendors seem to be alienating would-be customers by drowning them in proprietary lingo. Dennis Drogseth (Network World) did a great job breaking down the issues so I could understand them.
The issues have been that while WAN optimization products deliver real value on a link-by-link basis, they obscure end-to-end visibility so that strategic planners are left to guess at the real “before” and “after.” They are also typically housed as an appliance with, up until very recently, little interest in integration with the rest of the management community. Finally, as “cheese-stand-alone” solutions, they become costly when scaled to large enterprise environments with hundreds and sometimes thousands of remote locations.
Ok. I get it now. WAN optimization vendors are scrambling because nobody in their right mind today wants to buy into an expensive cheese-stands-alone solution.
|In a typical setup, a WAN accelerator is placed at each end of a WAN link. The appliance sits on the LAN side, on the clear-text side of a VPN device and behind the firewall or Internet router, and intercepts all traffic. The traffic is compressed, sent across the WAN, decompressed by the remote accelerator and then forwarded to the destination.
Mike DeMaria, Breaking the WAN Bottleneck
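The compress-send-decompress flow DeMaria describes is easy to sketch in code. Here’s a toy version using zlib; real accelerators also do fancier tricks like deduplication, caching and protocol optimization:

```python
import zlib

def accelerate_outbound(payload: bytes) -> bytes:
    # Local appliance: compress clear-text LAN traffic before it crosses the WAN.
    return zlib.compress(payload)

def accelerate_inbound(wan_data: bytes) -> bytes:
    # Remote appliance: decompress and forward to the real destination.
    return zlib.decompress(wan_data)

# Chatty, repetitive traffic (like most office protocols) compresses well.
message = b"GET /report.pdf HTTP/1.1\r\nHost: files.example.com\r\n" * 20
on_the_wire = accelerate_outbound(message)
delivered = accelerate_inbound(on_the_wire)

assert delivered == message
print(f"LAN bytes: {len(message)}, WAN bytes: {len(on_the_wire)}")
```

The destination never knows the compression happened, which is exactly why the monitoring headaches below exist.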
Peter Sevcik also does a really good job explaining how WAN accelerators work and teaches us the difference between “transparent addressing” (the term used by Cisco) and “correct addressing” (the term used by Riverbed).
There are two approaches used as seen by the routers between the accelerators. Transparent addressing shows the original client-server source and destination addresses and hides the addresses of the accelerators. Correct addressing shows the addresses of the accelerators and hides the addresses of client-server. Both approaches work. Both approaches have their pros and cons…
In a transparent addressing architecture, the monitoring tools will continue to show network usage by the original client-server addresses and port numbers. But you will not directly see the total traffic carried between the accelerators. Remember that they are masked.
With a correct addressing architecture, traffic monitoring tools will show traffic as having come from/to the acceleration appliances. In this case the client-server traffic volume is masked. However, in both cases the traffic volumes that are reported by routers or probes between the accelerators will be dramatically changed from the original true traffic as seen on the LANs.
In both cases the best solution is to move the network monitoring probes to the LAN side of the appliance or to gather usage information directly from the appliance itself. An ongoing, real-time before-and-after picture of what the accelerators are doing (like how much compression is being achieved) can only be supplied by the accelerators themselves. So we recommend shifting traffic monitoring to the appliance. That way you get the same accurate data from either approach.
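To make Sevcik’s two schemes concrete, here’s a toy model of what a probe sitting between the accelerators would report under each one (all the IP addresses below are made up):

```python
# Addresses of the real endpoints and of the two accelerator appliances.
CLIENT, SERVER = "10.1.1.25", "10.2.2.80"
ACCEL_A, ACCEL_B = "10.1.1.254", "10.2.2.254"

def probe_view(mode: str) -> tuple[str, str]:
    """What a mid-WAN monitoring probe sees as (source, destination)."""
    if mode == "transparent":
        # Cisco's term: original client-server addresses preserved,
        # accelerator addresses hidden.
        return (CLIENT, SERVER)
    elif mode == "correct":
        # Riverbed's term: appliance addresses shown,
        # client-server addresses hidden.
        return (ACCEL_A, ACCEL_B)
    raise ValueError(f"unknown addressing mode: {mode}")

print(probe_view("transparent"))
print(probe_view("correct"))
```

Either way, half the picture is masked from the middle of the network, which is why the probes belong on the LAN side.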
|Remember the word “spintronics” as you may be hearing more and more about it over the coming months. It’s basically a phenomenon that creates magnetic currents that behave much the same way electric currents do, except without all the heat that electric currents generate.|
Lots of buzz about spintronics since Eiji Saitoh of Keio University in Yokohama, Japan, published an article in Nature about a phenomenon called the spin Seebeck effect. Potentially, spintronic devices would store information magnetically and use magnetism for battery power. (Magnets don’t have waste heat. If scientists can reduce waste heat, it could also help with computer chip miniaturization, lower power consumption and improve speed.)
Here’s a fairly easy-to-understand explanation of spintronics from Nanotechnology Now:
All spintronic devices act according to the simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. Spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to tens of femtoseconds during which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications and, potentially, for quantum computing, where electron spin would represent a bit (called a qubit) of information.
Magnetoelectronics, Spin Electronics, and Spintronics are different names for the same thing: the use of electrons’ spins (not just their electrical charge) in information circuits.
|“Forget about aliens, let’s cure AIDS.”
Stanley Litow, quoting a commenter after the launch of the World Community Grid
I’m proud to help spread the news that IBM is backing a distributed grid supercomputer called the World Community Grid. As I write this, over 413,000 members volunteering 1.2 million computers are harnessing their idle computing power to help scientists working on humanitarian causes. The really interesting part is that this initiative will create a kind of hybrid supercomputer and once again change the definition of “the cloud.” (IBM piloted the program on their internal cloud and then extended the grid out to individual computer users.)
To become a member of World Community Grid and donate your idle processing power so scientists can find a cure for AIDS, develop more efficient solar panels or help humanity in some other useful way, all you have to do is sign up at www.worldcommunitygrid.org. You’ll be asked to install a small piece of software that allows your computer to request work from the World Community Grid’s server. After the work has been completed, your computer will send the results back to the WCG server and ask it for a new piece of work. A screen saver will tell you when your computer is busy being a supercomputer.
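That request-compute-report loop is the whole trick. Here’s a sketch with stand-in functions (the real client is grid software far more robust than this, but the shape of the loop is the same):

```python
import random

def request_work():
    # Stand-in for fetching a work unit from the WCG server.
    return {"id": random.randint(1, 10**6), "data": list(range(1000))}

def compute(work_unit):
    # Stand-in for the actual science computation,
    # run only when your CPU would otherwise be idle.
    return sum(work_unit["data"])

def send_results(work_unit, result):
    # Stand-in for reporting the finished work unit back to the server.
    print(f"work unit {work_unit['id']}: result {result} sent back")

# The volunteer client repeats this loop indefinitely; three rounds here.
for _ in range(3):
    unit = request_work()
    result = compute(unit)
    send_results(unit, result)
```

Multiply that little loop by 1.2 million computers and you get a supercomputer.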
…The World Community Grid is running at an average of 179 Teraflops, roughly equivalent to the 11th most powerful supercomputer on earth. (The current heavyweight, IBM’s Roadrunner, runs at more than 1 Petaflop or 1,000 trillion calculations per second.)
The quote above comes from the article IBM and Harvard Tap World Community Grid by David Gelles. Litow, IBM VP for corporate citizenship and affairs, was referring to another grid computing initiative called SETI@home. SETI is an abbreviation for “search for extra-terrestrial intelligence.”
|In the long run, the IT department is unlikely to survive, at least in its familiar form. It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud.
Nicholas Carr, Why IT Will Change
The corporate IT department has had a dual nature until now. One really important function has been the kind of technical expertise that keeps the computing machines running.
Over the next five or 10 years, the technical aspect of the IT department will become less important. It will slowly evaporate as more of those experts go outside onto the grid. But the information management and information strategy elements will become, if anything, more important. The ways companies take advantage of digitized information will become more important, not less.
The big question in the long run is, do those types of skills—information management and thinking—remain in a separate IT department or do they naturally flow into business units and other traditional parts of the business? My guess is that over time, they’ll begin to flow into the business itself and that will be accelerated as individual workers and business units get more control over the way they are able to organize and manipulate their own information. I would be surprised if maybe 20 years from now there are still IT departments in corporations. That doesn’t mean that the skills in those departments are going away. The more technical skills will probably move out into the supplier community and the strategic thinking, or tactical thinking about information, will flow out into the business itself.
|This generation of web services got their start from LAMP – a stack of simple, yet powerful technologies that to this day is behind a lot of popular web sites. The beauty of LAMP is in its simplicity; it makes it very easy to get a prototype out the door. The problem with LAMP is in its scalability.
Alex Iskold, Reaching for the Sky Through The Compute Clouds
The first scalability issue is fairly minor – threads and socket connections of the Apache web server. When load increases and configuration is not tuned properly you might run into problems. But the second problem with LAMP is far more significant: the MySQL relational database is the ultimate bottleneck of the system.
Lately I’ve been reading about the future of the LAMP stack, which I always thought of as the poster child for Web 2.0. Alex got me wondering about the future of LAMP now that everything is cloud-colored. Will MySQL be the bottleneck? But then I read this article about Sun Microsystems throwing more chips into its “billion-dollar bet on the LAMP stack” with the recent launch of MySQL Enterprise 2008, and now I’m not so sure that LAMP is on its way out.
|If you can increase your capacity simply by adding another twenty nodes to your infrastructure (such as with standard clustered LAMP deployments) you should try putting a few nodes in a cloud for a month and see if it works for you.|
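For the curious, one common way teams relieve that MySQL bottleneck without abandoning LAMP is to put a cache like memcached in front of the database, so repeated reads never touch MySQL at all. Here’s a toy version of the pattern, with a plain dictionary standing in for memcached:

```python
import time

cache = {}        # stand-in for memcached
CACHE_TTL = 60    # seconds a cached answer stays fresh

def query_database(sql):
    # Stand-in for the expensive MySQL round trip.
    time.sleep(0.01)
    return f"rows for: {sql}"

def cached_query(sql):
    entry = cache.get(sql)
    if entry and time.time() - entry[1] < CACHE_TTL:
        return entry[0]                # cache hit: MySQL never sees the query
    rows = query_database(sql)         # cache miss: hit the database once...
    cache[sql] = (rows, time.time())   # ...and remember the answer
    return rows

cached_query("SELECT * FROM posts LIMIT 10")  # miss, goes to the database
cached_query("SELECT * FROM posts LIMIT 10")  # hit, no database round trip
```

It buys headroom, but it doesn’t make the bottleneck go away—every write and every cache miss still lands on that one database.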
|“Cloud computing” is an apt name for a technology that is many things to many people. Although each vendor that enters the space seems to have a different approach to cloud services, all of them face a common challenge: coming up with an achievable service guarantee to reassure hesitant customers.
Erika Morphy, Cloud Computing, Part 3: SLA Spirit in the Sky
Take, for example, the tech challenge of the service level agreement.
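Part of why that guarantee is so hard to write: the gap between the nines is bigger than it looks. A quick back-of-the-envelope calculation:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

# How much downtime per year each common SLA target actually permits.
for availability in (0.99, 0.999, 0.9999):
    downtime = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} uptime allows ~{downtime:.1f} hours of downtime/year")
```

Two nines is three and a half days of outage a year; four nines is under an hour. No wonder vendors hesitate to put a number in writing.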
|Cloudbursting is an application hosting model which combines existing corporate infrastructure with new, cloud-based infrastructure to create a powerful, highly scalable application hosting environment.
Jeff Barr, Cloudbursting – Hybrid Application Hosting
I could have come up with some kind of lifeless and forgettable acronym, but that’s not my style. I proposed cloudbursting in a meeting a month or two ago and everyone seemed to like it.
I really like Jeff Barr and usually agree with his observations, but this time I think he missed the boat… er, cloud. The term cloudburst doesn’t really describe a hybrid model at all. And it has a negative connotation. And it’s already been used in the blogosphere to describe what happens when your cloud is unavailable.
Remember Microsoft’s Hailstorm? Not a good name either. Still, I can see why Jeff didn’t want to just slap an ordinary acronym on the concept. Hybrid Application Hosting. HAH?
We need to put on our thinking caps and help him out.
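In the meantime, whatever name wins, the idea itself is simple enough to sketch: serve from your own data center until demand exceeds its capacity, then spill the overflow out to the cloud (the capacity number below is made up):

```python
DATACENTER_CAPACITY = 100  # requests/sec the corporate infrastructure can handle

def route_request(current_load: int) -> str:
    # Normal load stays on owned infrastructure;
    # during a spike, the overflow "bursts" out to cloud capacity.
    if current_load < DATACENTER_CAPACITY:
        return "datacenter"
    return "cloud"

print(route_request(40))   # a quiet Tuesday
print(route_request(250))  # launch-day traffic spike
```

You pay for cloud capacity only during the spikes, which is the whole appeal of the hybrid model, whatever we end up calling it.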