A new wireless standard that outpaces 802.11n tenfold in speed — but at a much shorter range — was approved by the Wireless Gigabit Alliance this week.
The Multi-Gigabit Wireless Specification, or WiGig 1.0, works on the little-used 60 GHz band and supports data transmission rates of up to 7 Gbps. But the technology creates only a 10-meter wireless network range – much shorter than the 100-meter range enabled by 802.11n Wi-Fi.
The technology can, however, be used with beamforming to automatically switch over to Wi-Fi at the 10-meter border, extending to a 100-meter network with throughput of 600 megabits per second.
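To put those two rates in perspective, here is a rough back-of-the-envelope comparison (my own illustration, using a hypothetical 25 GB file and the peak link rates quoted above; real-world throughput on either technology would be lower):

```python
# Rough transfer-time comparison for a 25 GB file (roughly a Blu-ray disc image)
# at WiGig's 7 Gbps peak versus 802.11n's 600 Mbps peak link rate.
file_bits = 25 * 8 * 10**9  # 25 GB expressed in bits (decimal units)

wigig_seconds = file_bits / (7 * 10**9)    # time at 7 Gbps
n_seconds = file_bits / (600 * 10**6)      # time at 600 Mbps

print(f"WiGig:   {wigig_seconds:.0f} s")   # about half a minute
print(f"802.11n: {n_seconds:.0f} s")       # over five minutes
```

The roughly 12x gap in transfer time tracks the ratio of the two peak rates, which is the whole trade-off: far more speed, far less reach.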
WiGig 1.0 will likely be used for home entertainment and will be built into PCs, TVs, cameras and mobile devices. The technology will enable users in the same house to watch multiple streams of video in different rooms. It is possible that the technology will be used to supplement 802.11n to provide video over WLAN in a multitenant environment, which could include universities and enterprise scenarios.
Devices using the standard are expected to be available in Q1 of 2010.
After watching this video on how to pick a tubular Kensington lock with a toilet paper roll, I had some serious doubts about my laptop’s safety:
[kml_flashembed movie="http://www.youtube.com/v/as-CPdf-rKI" width="425" height="350" wmode="transparent" /]
Had I watched this video when I forgot the keys to my Kensington MicroSaver Alarmed Computer Lock — generously given by CableOrganizer.com for review — I might have saved myself from disassembling an office desk with a screwdriver. Let me explain…
The laptop locking gaffe
Being a remote worker left little opportunity for me to truly test the Kensington MicroSaver Alarmed Computer Lock. Sure, I could see how my laptop looked chained to my home office desk, but that wouldn’t have made for a very exciting blog post.
Shortly after I received the lock in the mail, I coincidentally took a trip from the U.S. to the U.K. that let me put it to the test. Packing proved that the wire of the Kensington lock is not very flexible; you have to really work at coiling it up to fit it into small spaces. For those traveling, I’d recommend either skipping the tight packing altogether or quickly wrapping the coil with a few twist ties before it springs back on you.
When I set foot in the U.K. office, I was given the go-ahead to occupy the desk of a man who was out. Toward the end of that day I started a file transfer over my corporate VPN, and what seemed like minutes later, my ride back to the hotel rushed into the room demanding that I leave immediately.
“I can’t go,” I said.
They scratched their head.
“My progress bar says there are two more hours left for this file to transfer, and I need this downloaded before tomorrow morning,” I explained.
They clearly weren’t going to wait for me.
“Could I leave my laptop here?” I pleaded.
That was fine with them, and just as well for me, since it gave me the chance to legitimately test my Kensington lock for review. I threaded the lock between a leg panel and the table top — half-hoping the late-working employees or maintenance men would try to take it overnight. Would anyone tamper with it? Would I hear the alarm sounding outside the brick walls of the building the next morning?
When I arrived the next day, it’s what I didn’t find that started the panic. I walked into the office to find the desk already occupied. The man (who didn’t know I was borrowing his space) was already working at his desk next to my laptop, which was locked to his table. Imagine my face when I obligingly went to his desk to remove my laptop only to realize I had left the keys miles away at my hotel!
After some debate, and nervous laughter on my part, one employee said we should take the desk apart. It was a three-man operation: one person did the unscrewing; a second held the desk panel as it came free; a third held the table top so that it, too, wouldn’t fall. Once the bolts were out, the man under the desk unlooped the lock from its offending location, and back together the desk went.
Attempting to pick a Kensington lock
It wasn’t until I returned to the States weeks later that I discovered the video. Of course I had to try hacking the lock, but it wouldn’t open. I blamed it on having an inferior toilet paper roll (if there is such a thing), or on my lack of utility tape. I first tried with Scotch tape, then started over using duct tape. Every time I attempted to pick the Kensington lock, I ended up with a mutilated piece of cardboard (shown right).
Maybe the Kensington MicroSaver Alarmed Computer Lock has better security than the one in the YouTube video — or maybe I lack the lock-picking finesse of the demonstrator. Either way, it brought me to a broader conclusion about laptop security.
Secure laptops like you would your network
A security expert once told me there’s no such thing as perfectly secure data. If you wanted zero risk of data being stolen, you would have to keep your data off networks altogether. Network security offers prevention and protection methods — but they won’t be 100% safe.
That’s why security needs to come in layers. Enterprise network security expert Michael Gregg explains the concept of network security defense in depth in this expert response. Just as your network can’t have only a firewall, or only anti-virus, your laptops need defense in depth to slow down corporate crackers. Password protect laptops; add laptop tracking software; figure out stolen laptop recovery if it ever gets that far.
CableOrganizer.com talked about seven ways to prevent computer and data theft by using all of these physical computer security products:
- a USB port block
- a laptop lock
- a USB fingerprint reader
- a notebook privacy filter
- an anti-theft PC security stand
- a laptop lockbox
- a CPU security cabinet
While not every security product is needed for every laptop, the idea of securing in layers is essential to any network, laptop or mobile device.
IBM will step back into networking in a big way in 2010 by buying Juniper Networks, according to IDC. The New York Times “Bits” blog says IDC will release some of its year-end predictions for 2010 today. One of its bolder predictions appears to be the IBM-Juniper hookup.
Bits quotes IDC’s chief analyst Frank Gens:
Networking, Mr. Gens says, is increasingly part of the package of capabilities the largest technology companies must offer corporate clients. He points to Hewlett-Packard’s recent purchase of 3Com and Cisco’s partnership with EMC as evidence of the trend.
“If you are going to be in the hardware systems business,” Mr. Gens says, “you need network competence.”
This year IBM has stepped up its networking business, first with an announcement in April of a broad OEM agreement to sell IBM-branded Brocade Ethernet products. Then a few months later IBM announced an expansion of that deal with Brocade and added Juniper and Cisco switches to its OEM offerings.
In IDC’s prediction document (which you can download for free), IDC admits that the IBM-Juniper prediction is, in basketball terms, a “3-point shot.” But IDC says the prediction is driven by the “growing importance of [the network] in the IT world – especially with the emergence of cloud computing and the explosion of mobile devices,” which are driving the convergence and integration of the network with computing and storage systems.
A purchase of Juniper seems like a logical step for IBM, if it wants to buy its way back into the networking business whole-hog. Although Juniper is probably best known as a service provider equipment vendor, it has made big strides with its enterprise Ethernet switching and data center networking business over the last year or so. IBM would certainly see the Juniper acquisition primarily as an opportunity to add data center networking into its overall product portfolio.
An IBM-Juniper merger would open the door to a huge three-way data center war among IBM, Cisco and HP. All three would offer soup-to-nuts technology for the data center. Buyers of networking gear would suddenly have three monstrous companies to choose from, and companies like Brocade, Force10 and Extreme would be bigger underdogs than ever before.
On the blog 24/7 Wall Street, Jon Ogg boldly predicted this week that Motorola is one of 10 brands that will disappear in 2010. It’s time to break up the company and “scuttle a brand with a bad reputation,” he wrote. A bad reputation among whom? Enterprises? I don’t think so. Brocade and Extreme Networks both recently announced strategic OEM agreements with Motorola’s wireless LAN business. They seem to think the Motorola brand is just fine.
Just before the economy took a dive, Motorola announced vague plans for a corporate breakup. The company would spin out or sell off its struggling mobile handset division so that its networking businesses could thrive. Now it appears that success with smartphones built on Google’s Android OS (the Cliq and the Droid) has Motorola’s leadership more bullish about the handset division. The scuttlebutt now has Motorola selling off its set-top box and network equipment divisions and holding onto the handset division.
Will any of this happen? Hard to say. Plenty of big technology companies (Cisco, HP, Dell) have been in a buying frame of mind in recent months. But one thing is clear: I haven’t seen a single Droid advertisement that informs consumers that the hot new iPhone alternative is a Motorola product. If Motorola is planning to dump its infrastructure business and focus on handsets, why isn’t it associating its brand with Droid?
Meanwhile Motorola’s brand remains strong among enterprises (and telecoms). Motorola’s wireless LAN business is a top-five market leader (although it battles over scraps with companies not named Cisco and Aruba). Its enterprise mobility business (Good Technology) is a well-known brand. And Motorola still has a good reputation among public safety agencies, shipping and transportation companies and football coaches for its two-way radios and its radio dispatch systems.
I think the Motorola brand will survive 2010 just fine. The question is, which part of the company will hold onto it?
CableOrganizer.com asked me to review a few products as part of a data theft prevention promotion it was running. One such product was a USB port block.
You may be wondering why you would want to block ports on your laptop. If so, here is why network administrators have dubbed it the “evil USB port.”
The answer, simply, is that USB storage devices pose a network security threat to corporate data. MCSE Brien Posey wrote in his article on stopping USB storage devices that an unblocked USB port could be where a careless network user plugs in an infected USB device, or where a rogue employee could easily load unlicensed software or programs. Unblocked USB ports could also open the door to corporate espionage. Imagine a disgruntled employee slipping in a thumb drive and downloading several MBs of sensitive enterprise data in a flash. (By the way, you can check out this guide on network user management to learn about managing problem network users.)
While Posey mentions that one way to physically block USB ports is to pump them full of epoxy, that approach disables the ports permanently, even for legitimate uses. He writes:
One of the biggest arguments against plugging up a computer’s USB ports with epoxy is that doing so usually voids the system’s warranty. I have also heard unconfirmed stories of technicians turning on a PC before the epoxy is completely dry and causing damage to the system board as a result.
In preventing USB device use with Windows Vista group policy, Posey explains how you can disable ports through the system BIOS, or through Windows Vista and Windows Server 2008 Group Policy settings. However, if you don’t use Windows Server 2008 or Vista, you may be stuck. What are your other options? A port block, perhaps.
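For older Windows machines, one commonly cited software-side stopgap (my own addition here, not something from Posey’s article) is to keep the USB mass-storage driver from loading at all via the registry. Setting the USBSTOR service’s Start value to 4 (disabled) stops Windows from mounting thumb drives while leaving keyboards, mice and other non-storage USB devices alone:

```reg
Windows Registry Editor Version 5.00

; Disable the USB mass-storage driver so thumb drives won't mount.
; Non-storage USB devices (keyboards, mice) are unaffected.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```

Setting Start back to 3 (the default) re-enables the driver, and any user with admin rights can flip it back, which is exactly why a physical barrier still has a place in a layered approach.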
A USB port block is a tiny plate of metal (or some metal alloy) that covers ports. I’ve shown a larger-than-life picture of one to the right. Rather than gluing your port shut, you can simply cover it, while still allowing authorized USB devices when you need them. It sure beats epoxy, and port blocks are relatively cheap: CableOrganizer.com’s USB Port Block was priced at $4.97, and I saw one from Katerno priced at $3.49. Of course, compared with a $20.00 bottle of epoxy, glue would cover more ports at a smaller cost, but the end result could be far more expensive if the computer malfunctions.
The downside of using a USB port block is that you need a tool called a USB lock to fasten it to your computer. You’d also have to visit each port of each computer in your office if you wanted to block ports across your enterprise. However, for CEOs or employees who travel often, it may be a way to heighten laptop security in a pinch. Very little technical experience is needed to place one in a port; you could even have a remote employee install it, provided they had a USB lock.
Government agencies aren’t likely to move their core data to the cloud just yet, in part because they can’t be sure their data won’t be moved to servers across state lines, where different regulations could apply to how it can be accessed or stored. If cloud service providers want to lure government agencies, they’ll have to provide SLAs that guarantee data will stay within specific zones. In this 3-minute video, New York State Deputy CIO Rick Singleton talks about the regulatory challenges posed by the public cloud (similar to the problems the health insurance industry has with the cloud), and why there is not enough interoperability between private and public clouds.
[kml_flashembed movie="http://www.youtube.com/v/ics4CLgl87U" width="425" height="350" wmode="transparent" /]
Health insurance and financial services industry executives say they’re not ready to trust their core data to the cloud. Executives say strict HIPAA and Sarbanes-Oxley Act regulations and security concerns make it impossible to trust much more than a few applications and back-up systems to the cloud. In this 3-minute video, taken at Interop NY last week, John Merchant, assistant vice president at The Hartford Financial Services Group, discusses the cloud computing challenges posed by regulatory and security requirements.
[kml_flashembed movie="http://www.youtube.com/v/ADpLR90jCmQ" width="425" height="350" wmode="transparent" /]
BusinessWeek asked a question a few days ago that I asked last June. Is Cisco stretching itself too thin? I can’t pretend to be expert enough to answer that question, but chasing 30 new technology markets at once is quite ambitious. Making multiple multi-billion dollar acquisitions of Tandberg and Starent to solidify its position in some of those markets is even more ambitious.
Cisco’s leap into the server market seems to have some investors rattled. The profit margins on servers are much lower than in some of Cisco’s core markets (switches and routers). As BusinessWeek quoted one investor who questioned Cisco CEO John Chambers at Cisco’s annual meeting earlier this month: “At what size does Cisco become so big and diverse that its growth and profitability will plateau?” Chambers’ answer: hopefully after he retires.
Analysts and investors are wringing their hands over whether Cisco can remain nimble as it expands into new markets and burns its longstanding partnerships with server vendors like HP, Dell and IBM. BusinessWeek points out that HP’s aggressive expansion into the networking market is in part a response to Cisco’s moves in the server market. However, among the comments on the BusinessWeek story, someone named “CS” disagreed that Cisco fired the first shot. “HP has been (unsuccessfully) targeting Cisco’s core market for years with ProCurve. Was Chambers expected to sit idle while one of his largest partners openly attempts to undermine him?”
I’m not quite convinced that ProCurve has been targeting Cisco’s “core” market for years. ProCurve’s greatest success has been in selling edge switches to the midsized enterprise market. Does that sound like Cisco’s core market? Prior to HP’s acquisition of 3Com, did ProCurve have any core routers on the market? Did it have any switches that could credibly compete against the Catalyst 6500 or any of the new Nexus switches?
So who started this food fight? Once the fight has begun, does it really matter? No. It only matters who wins or loses. Arguing over whether it was Chambers or HP CEO Mark Hurd who tossed the first plate seems like idle gossip.
Right now the winner looks to be enterprise customers. As Cisco expands and innovates, data center buyers have a new high-end server vendor to consider. And as HP integrates 3Com and H3C into its existing ProCurve division, enterprise networking buyers will find they have a truly viable alternative to Cisco. Choice is always a good thing, and increased competition between vendors doesn’t hurt, either.
Move over data center, networking has regained its rightful place here at Interop NY this week.
Interop was once the networking show, but that ended years ago as the conference tried to become the virtualization show and the data center show to remain relevant.
That shift was reflective of an IT industry that seemed to place all of its emphasis on the data center with little on the apparently irrelevant network. That has clearly changed.
Ironically, it is the data center that has restored networking’s relevance. Users at Interop are trying desperately to understand how to morph their networks to provide manageability and visibility for virtualized environments and to deliver dynamic applications and data in a service-provider-style environment.
Networking track moderator Jim Metzler put it best Wednesday during the session “Breakthrough Network Technologies” when he said, “It wasn’t that long ago I thought networking was pretty staid. Not a lot was happening … Technology had gone from frame to ATM to MPLS … and there was no post-MPLS.”
Now talk has turned to implementing network automation in order to apportion dynamic compute resources and applications on demand. Users are asking for better visibility and management tools that work across physical and virtual networks.
Network security is a changed topic at Interop, with conversation focusing on application-specific strategies and the ability to monitor and prevent attacks across private and public networks in the cloud.
SLA is another buzzword as attendees are grappling with requests from their enterprises to ensure application stability and service provider-style services.
In most of these areas – automation, security, SLAs – it appears there are few solutions that satisfy networking teams. Automation is not broad enough and often doesn’t work in multivendor environments. Networking teams are not ready to provide real internal SLAs, and service providers – Amazon, Google and Microsoft included – are unable to offer SLAs that satisfy. Security is ever-evolving, but tools are far from able to offer the reporting and analysis networking pros need to entrust their data even to a hybrid cloud model.
Regardless of the unresolved issues, it is at least clear that the network is, and will continue to be, the lifeline of this emerging matrix of virtualized environments delivering dynamic data and applications. Now it is time for the network to meet the challenge.
Guess what, networking teams? Consider yourselves service providers. At least that seems to be the message here at Interop New York.
This morning Citrix CEO Mark Templeton and Cisco VP Marie Hattar keynoted the conference, both highlighting the consumerization of enterprise IT and its influence on workers’ expectations for applications and services.
“Our experience when we go home is a better experience than we have [at work],” Templeton said. “Consumerization will force more IT change in the next 10 years than any other trend.”
As enterprise users expect the same type of applications as consumers, enterprises will move to a cloud computing model (likely a hybrid of public and private) in which applications and services will be delivered to any user on any device in a completely secure manner, both Templeton and Hattar said.
Templeton explained the shift as the next phase in the IT evolution, first from mainframe computing to distributed client-server architecture and now to the cloud. This latest shift will “eliminate [some of] the distributed elements” by implementing virtualization of servers, desktops, applications and networks, Templeton said.
In that move, the data center will become known as “a delivery center,” in which the service is controlled, but not the device, he said.
The heavy cloud focus here at Interop is also driving much discussion of a move to the flat network, in which switching layers (access, aggregation/distribution and core) are collapsed, enabling enterprises to connect access switches directly into the core and wiping out the middle tier.
A number of users here at the show were quick to point out the many problems with flattening the network and broadening Layer 2, including running out of IP addresses, and a lack of automation and management techniques.