|In a typical setup, a WAN accelerator is placed at each end of a WAN link. The appliance sits on the LAN side, on the clear-text side of a VPN device and behind the firewall or Internet router, and intercepts all traffic. The traffic is compressed, sent across the WAN, decompressed by the remote accelerator and then forwarded to the destination.
Mike DeMaria, Breaking the WAN Bottleneck
Peter Sevcik also does a really good job explaining how WAN accelerators work and teaches us the difference between “transparent addressing” (the term used by Cisco) and “correct addressing” (the term used by Riverbed).
There are two approaches used as seen by the routers between the accelerators. Transparent addressing shows the original client-server source and destination addresses and hides the addresses of the accelerators. Correct addressing shows the addresses of the accelerators and hides the addresses of client-server. Both approaches work. Both approaches have their pros and cons…
In a transparent addressing architecture, the monitoring tools will continue to show network usage by the original client-server addresses and port numbers. But you will not directly see the total traffic carried between the accelerators. Remember that they are masked.
With a correct addressing architecture, traffic monitoring tools will show traffic as having come from/to the acceleration appliances. In this case the client-server traffic volume is masked. However, in both cases the traffic volumes that are reported by routers or probes between the accelerators will be dramatically changed from the original true traffic as seen on the LANs.
In both cases the best solution is to move the network monitoring probes to the LAN side of the appliance or to gather usage information directly from the appliance itself. An ongoing, real-time before-and-after picture of what the accelerators are doing (like how much compression is being achieved) can only be supplied by the accelerators themselves. So we recommend shifting traffic monitoring to the appliance. That way you get the same accurate data from either approach.
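To make the two addressing schemes concrete, here is a toy sketch (not real accelerator code; all addresses are invented) of what a monitoring probe sitting between the two accelerators would record under each approach:

```python
# Toy illustration: what a mid-WAN monitoring probe records for a packet
# under "transparent" vs "correct" addressing. All addresses are hypothetical.

def probe_view(packet, scheme):
    """Return the (src, dst) pair a probe between the accelerators sees."""
    if scheme == "transparent":   # Cisco's term: original addresses shown
        return (packet["client"], packet["server"])
    elif scheme == "correct":     # Riverbed's term: appliance addresses shown
        return (packet["local_accel"], packet["remote_accel"])
    raise ValueError("unknown addressing scheme")

pkt = {
    "client": "10.1.0.5",         # LAN-side client
    "server": "10.2.0.9",         # remote server
    "local_accel": "192.0.2.1",   # accelerator at this end
    "remote_accel": "192.0.2.2",  # accelerator at the far end
}

print(probe_view(pkt, "transparent"))  # ('10.1.0.5', '10.2.0.9')
print(probe_view(pkt, "correct"))      # ('192.0.2.1', '192.0.2.2')
```

Either way, the probe sees only one pair of addresses, which is exactly why the piece recommends monitoring on the LAN side or from the appliance itself.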
|Remember the word “spintronics” as you may be hearing more and more about it over the coming months. It’s basically a phenomenon that creates magnetic currents that behave much in the same way that electric currents work, except without all the heat that electric currents generate.|
Lots of buzz about spintronics since Eiji Saitoh of Keio University in Yokohama, Japan, published an article in Nature about a phenomenon called the spin Seebeck effect. Potentially, spintronic devices would store information magnetically and use magnetism for battery power. (Magnets don’t have waste heat. If scientists can reduce waste heat, it could also help with computer chip miniaturization, lower power consumption and improve speed.)
Here’s a fairly easy-to-understand explanation of spintronics from Nanotechnology Now:
All spintronic devices act according to the simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. Spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to tens of femtoseconds during which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications, and, potentially, for quantum computing where electron spin would represent a bit (called a qubit) of information.
Magnetoelectronics, Spin Electronics, and Spintronics are different names for the same thing: the use of electrons’ spins (not just their electrical charge) in information circuits.
|“Forget about aliens, let’s cure AIDS.”
Stanley Litow, quoting a commenter after the launch of the World Community Grid
I’m proud to help spread the news that IBM is backing a distributed grid supercomputer called the World Community Grid. As I write this, over 413,000 members volunteering 1.2 million computers are harnessing their idle computing power to help scientists working on humanitarian causes. The really interesting part is that this initiative will create a kind of hybrid supercomputer and once again change the definition of “the cloud.” (IBM piloted the program on their internal cloud and then extended the grid out to individual computer users.)
To become a member of World Community Grid and donate your idle processing power so scientists can find a cure for AIDS, develop more efficient solar panels or help humanity in some other useful way, all you have to do is sign up at www.worldcommunitygrid.org. You’ll be asked to install a small piece of software that allows your computer to request work from the World Community Grid’s server. After the work has been completed, your computer will send the results back to the WCG server and ask it for a new piece of work. A screen saver will tell you when your computer is busy being a supercomputer.
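The fetch-compute-report loop described above can be sketched in a few lines. This is purely illustrative; the real World Community Grid agent is far more involved, and every name here is invented for the example:

```python
# Hypothetical sketch of the volunteer-client loop: ask the server for a
# work unit, compute on idle cycles, report the result, repeat.

def fetch_work(server):
    """Ask the grid server for the next work unit (stubbed as a list here)."""
    return server.pop(0) if server else None

def compute(work_unit):
    """Stand-in for the actual scientific computation."""
    return sum(work_unit)  # placeholder "result"

def run_client(server, results):
    while True:
        unit = fetch_work(server)
        if unit is None:                  # no more work available
            break
        results.append(compute(unit))     # report result, then loop for more

server_queue = [[1, 2, 3], [4, 5, 6]]     # pretend work units from the server
results = []
run_client(server_queue, results)
print(results)  # [6, 15]
```

The point of the design is that each client is stateless between work units: it only ever needs one unit in hand, which is why millions of ordinary PCs can participate.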
…The World Community Grid is running at an average of 179 Teraflops, roughly equivalent to the 11th most powerful supercomputer on earth. (The current heavyweight, IBM’s Roadrunner, runs at more than 1 Petaflop or 1,000 trillion calculations per second.)
The quote above comes from the article IBM and Harvard Tap World Community Grid
by David Gelles. Litow, IBM VP for corporate citizenship and affairs, was referring to another grid computing initiative called SETI@home. SETI is an abbreviation for “search for extra-terrestrial intelligence.”
|In the long run, the IT department is unlikely to survive, at least in its familiar form. It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud.
Nicholas Carr, Why IT Will Change
The corporate IT department has had a dual nature until now. One really important function has been the kind of technical expertise that keeps the computing machines running.
Over the next five or 10 years, the technical aspect of the IT department will become less important. It will slowly evaporate as more of those experts go outside onto the grid. But the information management and information strategy elements will become, if anything, more important. The ways companies take advantage of digitized information will become more important, not less.
The big question in the long run is, do those types of skills—information management and thinking—remain in a separate IT department or do they naturally flow into business units and other traditional parts of the business? My guess is that over time, they’ll begin to flow into the business itself and that will be accelerated as individual workers and business units get more control over the way they are able to organize and manipulate their own information. I would be surprised if maybe 20 years from now there are still IT departments in corporations. That doesn’t mean that the skills in those departments are going away. The more technical skills will probably move out into the supplier community and the strategic thinking, or tactical thinking about information, will flow out into the business itself.
|This generation of web services got their start from LAMP – a stack of simple, yet powerful technologies that to this day is behind a lot of popular web sites. The beauty of LAMP is in its simplicity; it makes it very easy to get a prototype out the door. The problem with LAMP is in its scalability.
Alex Iskold, Reaching for the Sky Through The Compute Clouds
The first scalability issue is fairly minor – threads and socket connections of the Apache web server. When load increases and configuration is not tuned properly you might run into problems. But the second problem with LAMP is far more significant: the MySQL relational database is the ultimate bottleneck of the system.
Lately I’ve been reading about the future of the LAMP stack, which I always thought of as the poster child for Web 2.0. Alex got me wondering about the future of LAMP now that everything is cloud-colored. Will MySQL be the bottleneck? But then I read this article about Sun Microsystems throwing more chips into its “billion-dollar bet on the LAMP stack” with the recent launch of its MySQL Enterprise 2008, and now I’m not so sure that LAMP is on its way out.
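One common way teams relieve the MySQL bottleneck Alex describes is a read-through cache (memcached-style) in front of the database, so repeated reads never touch MySQL at all. The sketch below is illustrative only; the class and names are invented, not a real client library:

```python
# Minimal sketch of a read-through cache in front of a database: on a cache
# miss, fetch from the "database" and remember the answer; on a hit, skip
# the database entirely.

class ReadThroughCache:
    def __init__(self, db):
        self.db = db        # stand-in for MySQL (here, just a dict)
        self.cache = {}
        self.db_hits = 0    # count how often we actually touch the database

    def get(self, key):
        if key not in self.cache:      # cache miss: go to the database
            self.db_hits += 1
            self.cache[key] = self.db[key]
        return self.cache[key]         # cache hit: no database work

db = {"user:1": "alice", "user:2": "bob"}  # pretend table
store = ReadThroughCache(db)
for _ in range(1000):
    store.get("user:1")                    # 1,000 reads of the same row...
print(store.db_hits)                       # ...but only 1 database hit
```

The trade-off, of course, is cache invalidation on writes, which is its own famous hard problem; the sketch only shows the read path.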
|If you can increase your capacity simply by adding another twenty nodes to your infrastructure (such as with standard clustered LAMP deployments) you should try putting a few nodes in a cloud for a month and see if it works for you.|
|“Cloud computing” is an apt name for a technology that is many things to many people. Although each vendor that enters the space seems to have a different approach to cloud services, all of them face a common challenge: coming up with an achievable service guarantee to reassure hesitant customers.
Erika Morphy, Cloud Computing, Part 3: SLA Spirit in the Sky
Take the technical challenge of crafting a service level agreement, for example.
|Cloudbursting is an application hosting model which combines existing corporate infrastructure with new, cloud-based infrastructure to create a powerful, highly scalable application hosting environment.
Jeff Barr, Cloudbursting – Hybrid Application Hosting
I could have come up with some kind of lifeless and forgettable acronym, but that’s not my style. I proposed cloudbursting in a meeting a month or two ago and everyone seemed to like it.
I really like Jeff Barr and usually agree with his observations, but this time I think he missed the boat…er, cloud. The term cloudburst doesn’t really describe a hybrid model at all. And it has a negative connotation. And it’s already been used in the blogosphere to describe what happens when your cloud is unavailable.
Remember Microsoft’s Hailstorm? Not a good name either. Still, I can see why Jeff didn’t want to just slap an ordinary acronym on the concept. Hybrid Application Hosting. HAH?
We need to put on our thinking caps and help him out.
|Earlier performance-enhancing technologies, such as MPLS, helped support video as one of many applications. Now it’s time to address video as the main application.
Suraj Shetty, as quoted in Cisco, anticipating video tsunami, builds up network smarts
I’m keeping an eye on the Cisco Media Processing platform. The takeaway is that Cisco is taking another step to position themselves as the company that’s going to help network administrators handle video traffic better.
Cisco marketing is pushing the idea of “Medianet.” The idea is that an intelligent network will understand what format to convert the video to, and then the hardware will transcode the video so it can play on any device, including digital signage (another area where Cisco has been positioning itself as Number 1). Video transcoding converts the content into different formats so it can be viewed on different types of devices. It’s key to managing bandwidth and storage, and it’s been a real brick wall for video.
The first product for Medianet is called the Cisco Media Experience Engine 3000, otherwise known as MXE. It’s expensive — $50k — and I’m not quite sure yet who the customer is. Cisco also introduced the Cisco Advanced Video Services Module (AVSM). It’s part of the Cisco ASR 9000 edge router. The literature says AVSM enables “terabytes of streaming capacity at the aggregation edge while simultaneously offering content caching, ad insertion, fast channel change and error correction.”
|We knew that the volume of new attacks and the vectors used were only going to increase, so we chose to stay ahead of the curve with a behavioral analysis system. I believe behavior and anomaly-based solutions will be most effective long term.
Jamie Arnold, as quoted in SUNY’s Binghamton Monitors Network with Lancope’s StealthWatch
I spent part of the morning reading about anomaly-based network monitoring. In October, IBM announced that they would no longer sell the IBM Proventia Network Anomaly Detection System (ADS). StealthWatch seems to be getting a lot of buzz, especially with college campuses, whose biggest threats probably come from right inside the network.
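The behavioral idea Arnold describes can be illustrated with a toy z-score detector: flag hosts whose traffic volume deviates sharply from the campus baseline. StealthWatch’s actual models are far richer; this sketch, with invented host names and numbers, just shows the principle:

```python
# Toy anomaly detector: flag hosts whose byte counts sit more than
# `threshold` standard deviations from the mean of all observed hosts.
import statistics

def anomalies(traffic, threshold=3.0):
    """Return hosts whose volume is an outlier relative to the population."""
    volumes = list(traffic.values())
    mean = statistics.mean(volumes)
    stdev = statistics.pstdev(volumes)   # population std dev
    if stdev == 0:
        return []                        # perfectly uniform traffic
    return [host for host, vol in traffic.items()
            if abs(vol - mean) / stdev > threshold]

traffic = {f"host{i}": 100 for i in range(20)}  # normal campus baseline
traffic["dorm-pc-7"] = 50_000                   # sudden spike (worm? P2P?)
print(anomalies(traffic))  # ['dorm-pc-7']
```

Note that this is anomaly-based rather than signature-based: nothing here knows what the attack looks like, only that one host’s behavior no longer matches everyone else’s — which is exactly why it suits a campus where the threats are inside the perimeter.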