|With the introduction of virtual port technology, the common pooled resources of virtual cells can be partitioned into multiple virtual WLANs, with a unique WLAN for each user device; the dedicated virtual WLAN moves with the user as long as his device is connected to the wireless network.|
As with wired switches, the network has full control over the resources and services allocated to a given device. Because the device is “sandboxed” in its own virtual WLAN, the user has a highly reliable wired-like experience, with full access to appropriate resources yet protected from disruptions by other users’ network demands. When devices are partitioned into their own dedicated virtual WLANs, the network can control client behavior in ways that proprietary client driver extensions and AP radio management technologies cannot – without adding any client software. As with virtual cell technology, virtual port technology is fully based on IEEE 802.11 standards.
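The per-device partitioning described above can be sketched as a simple controller that hands each client its own virtual port identifier and keeps it stable as the client roams. This is purely an illustrative sketch, not the vendor's implementation; all class and method names (and the BSSID-derivation scheme) are hypothetical.

```python
# Hypothetical sketch: each client gets a dedicated virtual port (modeled
# here as a per-client virtual BSSID) that stays with it as it moves
# between physical APs. All names and the derivation rule are illustrative.

class VirtualPortController:
    def __init__(self):
        self.ports = {}  # client MAC -> virtual BSSID

    def connect(self, client_mac):
        """Assign a dedicated virtual BSSID the first time a client joins."""
        if client_mac not in self.ports:
            self.ports[client_mac] = self._new_bssid(client_mac)
        return self.ports[client_mac]

    def roam(self, client_mac, new_ap):
        """Return the same virtual BSSID regardless of which physical AP
        now serves the client, so the client never sees a handoff."""
        return self.ports[client_mac]

    def _new_bssid(self, client_mac):
        # Derive a locally administered MAC for the virtual port from the
        # last three octets of the client's own MAC (illustrative only).
        return "02:00:00:" + ":".join(client_mac.split(":")[-3:])
```

Because the mapping lives in the network, not in the client, this models how the network side can control behavior without any client software.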
|The model we tried to emulate in the project codenamed Blue Business Platform was to think just like the Apple iPod model — the iTunes Store (Smart Market), iPod (Smart Cube), and the iTunes desktop application (Smart Desk).|
|Silicon Valley is chattering about who will get tapped to be the nation’s first “chief technology officer” in the Obama administration. There’s no doubt the job will be a tough one but could offer one surprising perk: a quiet way to cash out of a stock portfolio and invest in, say, Treasury bonds, while significantly deferring any capital gains taxes.|
Elizabeth Corcoran, Obama’s CTO: It’s Not About The Money
But the job that the Obama team has in mind seems to be less about setting a lofty vision statement for the government and more about orchestrating tactics to get different agencies to cooperate, share best practices and live up to the goal of creating a “more transparent” government.
|Sure, it’s proven, and a lot of people use it. But like many proprietary technologies, it also has some unappealing characteristics. It demands specialized expertise. It’s not always as fast as advertised. It’s not completely reliable. It certainly doesn’t work and play well with others. Yes, we are talking about InfiniBand.|
Dan Tuchler, Incoming: 10 Gigabit Ethernet for HPC
When it comes to reducing capital and operating expenses, one infrastructure is simply better than two — or more — and the HPC environment is no exception. High-performance computing clusters that use an InfiniBand interconnect also use Ethernet. Ethernet is necessary for user and storage connectivity, and for the management network that orchestrates the cluster. Replacing the InfiniBand interconnect with 10GE to create a single, all-inclusive infrastructure will cut hardware and power costs, and simplify manageability. And that infrastructure combines high performance with low power needs and sufficiently low latency for many HPC applications, making it an excellent fit for technical and budget requirements.
|For most of human history, people have lived in small tribes where everything they did was known by everyone they knew. In some sense we’re becoming a global village. Privacy may turn out to have become an anomaly.|
Dr. Thomas W. Malone, as quoted in You’re Leaving a Digital Trail. What About Privacy?
This New York Times article is about a bunch of kids at MIT who are trading their “privacy” for a free smartphone. The article ends with this quote by Dr. Thomas Malone, the director of the M.I.T. Center for Collective Intelligence. He’s got a good point. I remember stories my grandmother told about kids listening in on her parents’ “party line” or how the small-town operator in our upstate New York town was such a gossip that if you had something confidential to share, you would NEVER use the telephone.
Forrester’s Frank Gillett talks with Beet.tv about “cloudwashing.” He does a nice job breaking down where the future might lie for cloud computing and unlike a lot of other pundits, Frank seems to have his feet on the ground (and not in the clouds). Don’t miss this one.
[kml_flashembed movie="http://www.youtube.com/v/f7wv1i8ubng" width="425" height="350" wmode="transparent" /]
Thanks go to Dennis Shiao for recommending this video clip. My takeaway? I need to keep an eye on three things: software as a service (SaaS), platform as a service (PaaS) and virtual infrastructure as a service (usually shortened to just IaaS).
|By the time astronauts make humanity’s next giant leap, they may well be getting their e-mail via a dot-space address.|
Alan Boyle, Interplanetary Internet Passes Test
Today, NASA’s information superhighway to outer space flows through one major gateway – the Deep Space Network – to a host of space probes, scattered all the way out from Earth orbit to the edge of the solar system. As those probes proliferate, the Deep Space Network has to keep up with an increasingly complex communications schedule.
The new protocol developed by NASA to deal with complex communication scheduling is called Disruption-Tolerant Networking (DTN). It works somewhat like TCP/IP, but it doesn’t assume there will be a continuous end-to-end connection. If a destination path can’t be found, the data packets aren’t discarded. Instead, each network node keeps the information until it can communicate safely with another node. It’s called a store-and-forward system.
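The store-and-forward behavior is easy to see in miniature. The sketch below is a toy model in the spirit of DTN, not NASA's implementation: a node queues incoming bundles while no contact is available and forwards everything it holds once a link opens. All names are illustrative, and real DTN bundles carry far more metadata.

```python
# Toy store-and-forward node: bundles are held, not dropped, when no
# contact to the next hop exists. (Illustrative sketch only.)

from collections import deque

class DTNNode:
    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # bundles held until a contact opens
        self.link_up = False

    def receive(self, bundle):
        """Store the bundle instead of discarding it when no route exists."""
        self.buffer.append(bundle)

    def try_forward(self, next_hop):
        """Forward everything we hold while a contact to next_hop is open."""
        delivered = []
        while self.link_up and self.buffer:
            bundle = self.buffer.popleft()
            next_hop.receive(bundle)
            delivered.append(bundle)
        return delivered

earth = DTNNode("earth")
relay = DTNNode("relay")
relay.receive("telemetry-1")            # link down: stored, not dropped
assert relay.try_forward(earth) == []   # nothing moves yet
relay.link_up = True                    # contact window opens
assert relay.try_forward(earth) == ["telemetry-1"]
```

The contrast with TCP/IP is the `try_forward` step: delivery waits for a contact window instead of requiring an unbroken end-to-end path.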
|It is clear that without standards of one kind or another (de-facto or from a recognised body), there won’t be a market, and without a market, the cloud is unlikely to thrive. The competition isn’t as much between cloud providers, as it is between cloud providers and internal IT organizations. Cloud providers need to keep that firmly in mind.|
Benjamin Ellis, CloudCamp London 2: On Standards. Special Guest Post
A standard image format might provide a base level of standardization, but there is a risk that the industry then gets caught up in a ‘lowest common denominator’ model that throttles much of the unique innovation that the scale and speed of cloud computing allows. There was a consensus for a pragmatic approach: a layering of APIs, standardizing a layer at a time.
(My apologies to Benjamin Ellis! I had originally credited this quote to James Governor.)
|Breaking the petaflop barrier, a feat that seemed astronomical just two years ago, won’t just allow faster computations. These computers will enable entirely new types of science that couldn’t have been done before.|
The U.S. Department of Energy announced that the XT Jaguar, housed at its Oak Ridge National Laboratory, has hit a peak performance of 1.64 petaflops. That’s more than a quadrillion mathematical calculations per second.
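The "quadrillion" claim is just unit arithmetic — one petaflop is 10^15 floating-point operations per second — and it checks out:

```python
# 1.64 petaflops = 1.64e15 floating-point operations per second,
# comfortably above one quadrillion (1e15).
peak_flops = 1.64e15
quadrillion = 1e15
assert peak_flops > quadrillion

# At that peak rate, a job needing 10^18 operations would finish in
# roughly ten minutes (ignoring real-world efficiency losses):
seconds = 1e18 / peak_flops
print(round(seconds))  # 610 seconds
```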
Officially, the computing power will be used for simulation. Simulating climate conditions, for example. Or maybe nuclear explosion modeling.
|Knowing which applications and departments are driving IT expenses is critical now, and will continue to be critical as cloud computing goes mainstream in the enterprise. Therefore, any cloud chargeback solution should integrate with the chargeback framework that the company uses to manage their physical assets.|
John Gannon, Enterprise Cloud Computing: Understanding the Costs
James Governor’s blog post wrapping up the speakers at Cloud Camp London is a must-read for anyone interested in the future of cloud computing. Before we even have the luxury of talking about cloud chargeback, there’s some serious work to do re: standards.
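The integration Gannon's quote calls for can be sketched as a data-model question: tag each cloud usage record with the same department cost centers the company already uses for physical assets, then roll spend up by department. This is a hypothetical sketch; every field and function name below is illustrative, not any vendor's chargeback API.

```python
# Hypothetical chargeback sketch: cloud usage tagged with existing
# department cost centers, so cloud and physical-asset spend roll up
# through the same framework. All names are illustrative.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    department: str     # cost center from the existing chargeback framework
    application: str
    resource: str       # e.g. "vm-hours", "gb-stored"
    quantity: float
    unit_cost: float

    def charge(self):
        return self.quantity * self.unit_cost

def chargeback_by_department(records):
    """Roll cloud spend up to the same cost centers used for physical assets."""
    totals = {}
    for r in records:
        totals[r.department] = totals.get(r.department, 0.0) + r.charge()
    return totals

bills = chargeback_by_department([
    UsageRecord("finance", "erp", "vm-hours", 100, 0.10),
    UsageRecord("finance", "reporting", "gb-stored", 50, 0.05),
    UsageRecord("marketing", "web", "vm-hours", 200, 0.10),
])
# bills == {"finance": 12.5, "marketing": 20.0}
```

Keying records on the department cost center, rather than on cloud-specific identifiers, is what lets the same reports answer "which departments are driving IT expenses" for both cloud and physical resources.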