The Network Hub


February 13, 2012  3:57 PM

Behind the scenes at UNH’s InterOperability Lab

Shamus McGillicuddy

I recently visited the University of New Hampshire InterOperability Laboratory (UNH-IOL) for a behind-the-scenes tour. This 32,000-square-foot facility is where networking vendors large and small go to certify that their technologies interoperate with each other. The lab runs more than 20 testing programs that produce independent results in collaboration with networking and storage vendors. You name it, they test it: Ethernet, IPv6, data center bridging, Fibre Channel, SATA, Wi-Fi.

UNH-IOL is a neutral, third-party testing lab, staffed by engineers and UNH students. Each testing program the lab has in place corresponds to a consortium of vendors who support the lab’s activities in exchange for certification that their products interoperate with standards and with other vendors’ products.

It’s not just hardcore testing behind closed doors at UNH-IOL. The lab also hosts multiple events throughout the year, such as “plugfests”: group-testing events where multiple vendors get together in a room full of tables and cables and test their equipment against each other for interoperability according to a specific test plan. These plugfests consist of a week of 12-hour days of testing. In a world where vendors like to cut each other down in the press, it’s nice to think of them all sitting in a room together for hours at a time, sweating over who interoperates best with the competition.

All boxes big and small

Testing for IPv6 interop: Linksys and Cisco CRS-1 routers

It’s not just the big enterprise and service provider gear that gets tested here. Home routers, hard drives and 3G dongles are tested, too.

In the IPv6 interoperability lab space, UNH-IOL staff are testing for compatibility and interoperability among anything that is running IPv6. That includes PCs, dongles, printers, embedded operating systems, routers and switches. In the photo you can see Cisco’s CRS-1 router, which typically sits in a service provider core, being tested for IPv6 interoperability. And further up the rack you can see a tiny little Linksys E4200 home router that is also being tested at the lab.

“When building products, IPv6 can be complicated,” said Tim Winters, manager of IOL’s IPv6 testing lab. “These [vendors] need to talk to each other.”

The IPv6 testing lab at UNH-IOL has played an important role for vendors in recent years, especially those who are suppliers to the federal government, which has been extremely aggressive with deadlines for running native IPv6 on federal agencies’ networks.
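
The lab’s conformance suites go far deeper than any quick script, but to make the idea of “anything that is running IPv6” concrete, here is a minimal sketch of the most basic check of all: can you even open a TCP connection to a device over IPv6? The hostname is a placeholder for illustration, not anything drawn from UNH-IOL’s test plans.

```python
import socket

def ipv6_reachable(host, port=80, timeout=3.0):
    """Return True if a TCP connection to host succeeds over IPv6."""
    try:
        # Restricting getaddrinfo to AF_INET6 forces an IPv6-only lookup.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record or no IPv6 resolution available
    for family, socktype, proto, _, sockaddr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return True
        except OSError:
            continue
        finally:
            sock.close()
    return False

if __name__ == "__main__":
    # Placeholder target; point it at a device on your own network.
    print(ipv6_reachable("ipv6.google.com"))
```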

The racks of the living dead

Ethernet has been commercially available for more than 30 years now. In that time a lot of Ethernet vendors have come and gone, but a lot of their switches remain behind. Ethernet is still Ethernet. A vendor’s disappearance from the market doesn’t necessarily mean that it will disappear from every network. Somewhere out there, Compaq Fast Ethernet switches still lurk. And some network engineer, somewhere, is probably still proudly maintaining a few Bay Networks switches in a wiring closet.

A stack of old Bay Networks switches, among other vendors

So here’s the thing: If you’re a vendor building a state-of-the-art switch, you still need to make sure it will interoperate with everything out there. You never know what zombie vendors lurk in your customers’ closets. For this reason, UNH-IOL maintains what I like to call “racks of the living dead,” old Ethernet switches that its Ethernet testing lab uses to test for interoperability with everything. In the photo you can see just a portion of one of the racks of old switches UNH-IOL maintains, filled with several Bay Networks switches and a Nortel switch. To the right you’ll see a single Fast Ethernet HP ProCurve switch. Some of the vendors in this rack I had never heard of.

Jeff Lapak, manager of testing for 10, 40 and 100 Gigabit Ethernet (GbE), said UNH-IOL tests for more than just Ethernet interoperability in switches. His team will also check voltages on individual ports and the PHY (physical layer) coding on each switch.

What’s next? Gigabit Wi-Fi, OpenFlow testing?

UNH-IOL engineers are always keeping an eye on the latest advances in the networking industry, developing testing programs for new technologies as products start to hit the market.

One trendy technology that could find its way into the UNH-IOL labs soon is OpenFlow, the open source protocol for software-defined networking that has enjoyed a lot of hype recently.

Winters said OpenFlow is still in its early stages and interoperability testing is not as important to vendors right now. Most early solutions are somewhat proprietary. He predicted that when enterprises start buying second-generation OpenFlow products, they will start demanding interoperability from their vendors. That’s when testing will kick in.

Gigabit Wi-Fi? Same story. Mikkel Hagen, who manages testing for Wi-Fi among other technologies, said the next-generation Wi-Fi technologies just starting to come to market are too new for UNH-IOL to have a testing program in place for them. Just one vendor has even chatted with the lab’s staff about the technology, and no enterprise-grade products are expected until the second half of 2012 or early 2013.

February 2, 2012  11:12 AM

HP Networking makes OpenFlow generally available on 16 switch models

Shamus McGillicuddy

HP Networking announced today that OpenFlow support is generally available on 16 different switches within its HP 3500, 5400 and 8200 series.

OpenFlow, an open source protocol, enables software-defined networking by allowing a server-based controller to abstract and centralize the control plane of switches and routers. It has enjoyed plenty of buzz since Interop Las Vegas last year.

HP has had an experimental version of OpenFlow support on its switches for a couple years, but it was only available through a special research license. Saar Gillai, CTO of HP Networking, said his company is making OpenFlow generally available because HP customers are demanding it for use in production networks.

This position contrasts sharply with Cisco Systems’ apparent view of OpenFlow. At Cisco Live London this week, Cisco announced availability of new network virtualization technologies, including VXLAN support on its Nexus 1000v virtual switch and Easy Virtual Network (EVN), a WAN segmentation technology based on VRF-lite. But OpenFlow was not part of the discussion. In her keynote at Cisco Live, Cisco CTO Padmasree Warrior said her company wants to make software a core competency in 2012 and make networks more programmable, a key feature of software-defined networking. When asked where OpenFlow fits into this vision, Warrior said that software-defined networking is “broader than OpenFlow.” And Cisco VP of data center switching product management Ram Velaga said OpenFlow is “not production-ready.”

Gillai said HP is seeing a lot of interest in using OpenFlow in production networks. He said service providers are looking at using it to get more granular control of their networks. Enterprises want to use OpenFlow to make their data center networks more programmable. Particularly, enterprises that are using Hadoop to run large distributed applications with huge data sets are interested in using OpenFlow and software-defined networking for job distribution.

“We’ve spoken to customers who would like to set up a certain circuit with OpenFlow for the Hadoop shop to use at certain times of day,” Gillai said.

Of course, Warrior is right when she says software-defined networking is broader than OpenFlow. Arista Networks has been offering an open and non-proprietary approach to software-defined networking without OpenFlow for a couple of years.

But OpenFlow still has plenty of buzz, and big backers. HP’s support is significant, given its position as the number two networking vendor. And last week, IBM announced a joint solution with a new top-of-rack OpenFlow switch and NEC’s ProgrammableFlow controller.

IBM and HP. Those are two very big companies with a lot of customers.


January 26, 2012  1:42 PM

Meraki: from cloud-based WLAN to cloud-based networking

Shamus McGillicuddy

When start-up Meraki first hit the scene a few years ago, it was known as the cloud-based wireless LAN vendor, yet another player in a very crowded market. Today it’s repositioning itself as a cloud-based networking vendor, with an expanded portfolio aimed at competing directly with Cisco Systems.

“The dominant competitor we’re going after across all our products is Cisco,” said Kiren Sekar, vice president of marketing at Meraki.

Originally a pure WLAN player

Meraki first offered a unique solution: a wireless LAN that required only access points, with no central controller appliance. Instead, the access points connect to the Meraki cloud for control and management. Meraki’s cloud interface offers administrators configuration management, automated firmware upgrades, and global visibility into the managed devices.

The vendor has done pretty well in a booming wireless LAN market, listing Burger King, Applebee’s, and the University of Virginia as customers. Meraki’s approach offers low-cost network operations, since its cloud-based management interface is aimed at serving general IT administrators rather than experienced network engineers.

Now routers and access switches

Last year Meraki introduced a small line of branch router-firewalls, its MX series. Like its wireless line, the Meraki MX routers are managed through the cloud. Again, the cloud approach offers global views of ports across multiple sites, configuration management, alerting and diagnostics, and automated firmware upgrades. The firewall functionality also includes application-layer inspection, a key feature of next-generation firewalls.

This month, Meraki expanded its portfolio even further, adding MX boxes capable of connecting enterprise campuses and data centers. The routers feature two-click, site-to-site VPN capabilities and WAN optimization features such as HTTP, FTP and TCP acceleration, caching, deduplication and compression.

Also, Meraki launched a new MS series of Layer 2/3 access switches, including a 24-port Gigabit Ethernet model and a 48-port 1/10 Gigabit Ethernet model, with or without Power over Ethernet (PoE). Again, these MS switches are managed through the Meraki cloud. The switches are obviously designed to compete head-to-head with Cisco’s Catalyst 3750 series. The MS switches start at a list price of $1,199 for the 24-port, non-PoE model. Combine that with ongoing licensing for the cloud-management support, and the total cost of ownership on the basic switch is about $1,400 over three years.
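
The three-year figure implies that the cloud-management license adds roughly $200 on top of the hardware, or somewhere around $65 to $70 per year for the basic switch. The quick calculation below just backs that out of the two quoted prices; the per-year license figure is my inference, not a number Meraki provided.

```python
# Back-of-the-envelope check on the 24-port, non-PoE MS switch.
# Only the $1,199 list price and the ~$1,400 three-year figure come from Meraki;
# the implied licensing cost is inferred from those two numbers.
list_price = 1199        # USD, hardware list price
three_year_tco = 1400    # USD, approximate three-year total cost of ownership
years = 3

implied_license_total = three_year_tco - list_price
implied_license_per_year = implied_license_total / years

print(f"Implied cloud-management licensing: ~${implied_license_total} over {years} years "
      f"(~${implied_license_per_year:.0f} per year)")
```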

If a low cost of ownership on switching and routing (and WLAN) is important to you, Meraki can make a compelling case. However, according to a lot of the experts I talk to, the low-TCO sales pitch is starting to wear thin. Networks are getting more complex, not simpler, and low cost doesn’t ring bells in every IT department.

That’s why Meraki bundles home-grown, advanced network services into its boxes at no additional cost. The MX router-firewalls come with WAN optimization features built in, where other vendors would require a license upgrade (or a separate appliance). They also offer application-aware inspection and policy enforcement, something that usually requires a separate vendor’s product. I can’t vouch for how these Meraki features compare to the WAN optimization capabilities of Riverbed Technology or the next-generation firewall capabilities of Palo Alto Networks and Check Point Software. But Meraki isn’t interested in competing with Riverbed, Palo Alto or Check Point. It’s going after Cisco.

“We view WAN acceleration as a way to differentiate ourselves from Cisco as opposed to a way to compete with Riverbed,” Meraki’s Sekar said. “For every company that has Riverbed, there are 10 who don’t, because they can’t absorb the cost or the complexity. But everyone needs a firewall.”

Is a low-cost, easily managed networking vendor something you’re looking for? Or do you still prefer to go for the higher-end products from your established vendors? Let us know.


January 11, 2012  12:52 PM

Big Switch Networks offers open source OpenFlow controller

Shamus McGillicuddy

Big Switch Networks is releasing an open source version of its OpenFlow controller. The controller, Floodlight, is available under the Apache 2.0 license.

In the emerging software-defined networking (SDN) market, where the OpenFlow protocol has generated a lot of hype, Big Switch is a prominent start-up. In an SDN network built with OpenFlow, the control plane of the switches and routers is abstracted into a centralized, server-based controller, which defines flows for data forwarding based on a centralized view of the network topology. Big Switch, which hasn’t offered many details about the products it has in beta today, is presumed to be working on a commercial OpenFlow controller.
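
To make that flow abstraction a little more concrete, here is a rough sketch of what a single OpenFlow-style flow entry boils down to. The field names are simplified for illustration and are not taken verbatim from the OpenFlow specification or from any Big Switch product.

```python
# Illustrative only: a simplified OpenFlow-style flow entry.
# A controller computes entries like this from its network-wide view of the
# topology and pushes them into each switch's flow table.
flow_entry = {
    "match": {                 # which packets the rule applies to
        "in_port": 1,
        "eth_type": 0x0800,    # IPv4
        "ipv4_dst": "10.0.0.20",
    },
    "priority": 100,           # higher priority wins when matches overlap
    "actions": [
        {"type": "output", "port": 3},  # forward matching packets out port 3
    ],
    "idle_timeout": 30,        # seconds of inactivity before the entry expires
}

print(flow_entry)
```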

Why offer an open source version of the product?

“We see [software-defined networking] as a three-tier thing,” said Kyle Forster, Big Switch co-founder and vice president of sales and marketing. “At the bottom you have the data plane, the Ethernet switches and routers. The middle-tier is the controller, which is where Floodlight fits. The third tier is a set of applications on top of that controller. We play commercially in that application tier.”

In other words, when Big Switch starts shipping products, it will offer an OpenFlow controller, based on the open source Floodlight, with bundles of applications for running a software-defined network. That’s where the money will be made.

The applications that Big Switch and third-party developers can build on top of an OpenFlow controller range from rudimentary functions, like multi-switch forwarding models and topology discovery, to more advanced services, such as load balancing and firewalls.

Big Switch’s goal with the open source release is to get the code out into the public domain.

“By open sourcing that, you get two things. You get high quality code because it’s visible to everybody,” Forster said. “You also get a vast amount of community members downloading the thing and playing around with it. So it gets hardened very rapidly. It’s also useful for our partners. If a partner is going to build an application on top of our commercial controller, they want peace of mind that if they no longer want a commercial relationship with Big Switch, they have the opportunity to go down the open source path.”
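
If you do download Floodlight and want something quick to try, early releases exposed a REST interface for pushing static flow entries down to a switch. The endpoint path and JSON field names below are a sketch based on those early releases and may differ in whatever version you pull down, so check the project’s documentation before relying on them.

```python
import json
import urllib.request

# Sketch of pushing a static flow entry to a Floodlight controller's REST API.
# The endpoint path and field names changed across early releases; verify them
# against the documentation of the version you are running.
CONTROLLER = "http://127.0.0.1:8080"
ENDPOINT = "/wm/staticflowentrypusher/json"

flow = {
    "switch": "00:00:00:00:00:00:00:01",   # DPID of the target switch
    "name": "demo-flow-1",
    "priority": "32768",
    "ingress-port": "1",
    "active": "true",
    "actions": "output=2",                  # forward matching traffic out port 2
}

request = urllib.request.Request(
    CONTROLLER + ENDPOINT,
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())
```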

Download Floodlight and let us know what you think in the comments. Or contact me on Twitter: @shamusTT


January 10, 2012  4:30 PM

Gigabit Wi-Fi previewed at CES

Shamus McGillicuddy

A number of vendors are showing off early demonstrations of gigabit Wi-Fi at the Consumer Electronics Show (CES) in Las Vegas this week. By choosing CES as the venue for these demos, the implication is clear. Vendors see home electronics as an early proving ground for gigabit Wi-Fi technologies. Think wireless streaming of HD TV from your broadband connection to any device in your house.

Broadcom Corp. introduced 802.11ac chips at CES, which it describes as 5th generation (5G) Wi-Fi. The top-line chip, the BCM4360, supports 3-stream 802.11ac devices that can transmit at theoretical top speeds of 1.3 Gbps. Broadcom is highlighting the potential of these chips for multimedia home entertainment. Consumer networking and storage vendor Buffalo Technology is showing off an 802.11ac router at the show. These vendors aren’t offering many specifics on when gigabit Wi-Fi products will be commercially available, but it’s safe to assume products will hit store shelves in the second half of this year.
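
That 1.3 Gbps headline number is just the 802.11ac arithmetic for an 80 MHz channel: roughly 433 Mbps per spatial stream, times three streams. The quick check below uses the commonly cited VHT parameters (234 data subcarriers, 256-QAM, rate-5/6 coding, 3.6-microsecond symbols with the short guard interval); those assumptions are mine, not Broadcom’s published math.

```python
# Sanity check on the 3-stream 802.11ac headline rate, assuming an 80 MHz
# channel, 256-QAM modulation, rate-5/6 coding and the short guard interval.
data_subcarriers = 234     # data subcarriers in an 80 MHz VHT channel
bits_per_subcarrier = 8    # 256-QAM carries 8 bits per subcarrier
coding_rate = 5 / 6
symbol_time = 3.6e-6       # seconds per OFDM symbol with short guard interval
spatial_streams = 3

per_stream = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time
total = per_stream * spatial_streams

print(f"Per stream: {per_stream / 1e6:.1f} Mbps")   # ~433.3 Mbps
print(f"3 streams:  {total / 1e9:.2f} Gbps")        # ~1.30 Gbps
```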

802.11ac (which transmits at 5 GHz) and 802.11ad (which transmits at 60 GHz, with theoretical throughput as high as 7 Gbps) are both IEEE standards still in development. Ratification of those standards won’t come before 2013. But the Wi-Fi industry has never waited for final standards before. That’s what the Wi-Fi Alliance is here for: to make sure that all Wi-Fi technology, both pre-standard and post-standard, is interoperable and complies with the evolving standard.

The Wi-Fi Alliance, the industry association that certifies Wi-Fi technologies for interoperability, said it is developing a new interoperability program for both 802.11ac and 802.11ad right now.

You should probably expect enterprise wireless LAN vendors to start demonstrating their own 802.11ac and 802.11ad products and prototypes in the next few months. Interop Las Vegas might generate a lot of gigabit Wi-Fi noise this year.


January 9, 2012  12:13 PM

The IT jobs market: IT workers with business skills needed

Shamus McGillicuddy

According to Foote Partners LLC, the technology labor market analysis firm, the United States added IT jobs in 2011, but companies are looking for a different kind of IT worker than they were in the past.

Foote CEO David Foote said that employers are looking for “hybrid” workers who have a combination of technical and business skills and experience.

“The broader trend continues to be employers hiring hybrid IT-business professional[s]  with combinations of both business and technology knowledge, experience, and skills sets unlike those found in traditional IT organizations,” Foote said in a press release.

Foote examined the latest batch of U.S. employment numbers from the Department of Labor’s Bureau of Labor Statistics. Foote tracked four key job segments that apply to the IT employment market. Two subgroups fall under the “Professional/Technical Services” category: “Management & Technical Consulting Services” and “Computer Systems Design & Related Services”. Those groups posted a net gain of 5,500 jobs in December, versus growth of 9,600 jobs in November.

Foote noted that two other tech segments under the “Information” jobs category, “Telecommunications” and “Data Processing, Hosting & Related Services” posted a net loss of 4,300 jobs in December and 41,800 in 2011. This trend reveals that traditional IT jobs that require technical skills and little else are not as in-demand as the so-called hybrid business/IT worker.

So while the technology job categories had an up-and-down year in 2011, Foote noted that the hybrid IT/business professional is a hot and growing area that is hard to track through U.S. employment numbers, because those workers often get reported within non-technical job categories. Regardless, Foote recommends that unemployed and underemployed IT workers retool themselves for this new hybrid model.

“It’s going to be tough going for a lot of people in 2012, but we’re confident that the hybrid IT-business professionals will continue to be a bright spot in a tepid employment market, as well as people who possess a number of technical specializations that address specific areas of perceived business risk and reward. [IT workers] really need to study the employment market closely to spot opportunities that may be available to them right now or perhaps with some additional skills acquisition. For example, there are plenty of jobs out there right now that employers are having trouble filling because they can’t find suitable candidates.”


December 8, 2011  5:04 PM

11 ways to kick IT shop sexism in the ass (Men can do it too!)

Rivka Little

A conversation in an online forum during the Women in Tech session at the Large Installation System Administration (LISA) 2011 conference in Boston this week may have summed up the reason for ongoing sexism in IT shops.

User 1: “Are you going to the Women in Tech session?”

User 2: “No, I am not a woman.”

When this was read out loud to the mostly female audience Wednesday there was a collective groan. Why? By that point in the conversation, many women had agreed that often men make things uncomfortable for women in the workplace and don’t even realize it’s an issue.

But almost every woman in the workshop had a war story to share. One had been told she couldn’t be a sys admin because she was too physically weak to carry equipment. Another feared backlash for taking maternity leave. One mentioned the all-time standard: catcalls on the job. Wow.

It was suggested that IT shop sexism may be inherent to “IT engineer culture.” Engineers, after all, are known for their snark, their need to outsmart each other, and what one woman dubbed “pub humor.”

Turning that snark toward women may just be par for the course and shouldn’t be taken personally, another suggested.

The problem with accepting this as part of the “culture” is that it implies women have to learn to live with it – that men don’t have to change. But that didn’t sit well with most in the room.

“If we decide one of our organizational goals is to have more women and more diversity, but we want to be jerks, those are two irreconcilable goals,” said one panelist. Another added, “Pub humor should stay in the pub.”

The bottom line is that organizations must take specific steps to stem sexism in IT in order to diversify the workplace. They’ve got to do that by changing the way they talk to and about women. They must create meeting spaces that are conducive to equal sharing. And most importantly, they’ve got to offer mentoring to women, if for nothing else than to increase the dismally low number of women in IT organizations.

By the end of the session, panelist Máirín Duffy of Red Hat had whipped up 11 ways to make the IT organization a friendlier place for women (posted on the Sanguine Neurastheniac blog).


December 8, 2011  11:32 AM

Kaspersky Lab, Websense make anti-censorship statements

Shamus McGillicuddy

Technology vendors are always putting together industry consortiums to promote the development and adoption of new technologies or to advance their general business interests. But sometimes they make these moves out of conscience.

Earlier this week, information security vendor Kaspersky Lab announced that it would withdraw from the Business Software Alliance (BSA) next month because of the group’s support for the Stop Online Piracy Act (SOPA). SOPA, for those who don’t know, is legislation pending in the U.S. House of Representatives whose purpose is: “To promote prosperity, creativity, entrepreneurship, and innovation by combating the theft of U.S. property and for other purposes.”

Eugene Kaspersky blogged about his reasons for withdrawing his company from the BSA. He noted that the law would require that anybody infringing on U.S. copyrights be “cut off from the Internet by all search engines, ISPs, credit systems and other levers of control, without exception.”

Kaspersky objects to the fact that this legislation would impose American copyright enforcement laws beyond U.S. borders via America’s domination of the Internet, while doing nothing to protect the copyrights of non-Americans.

He also objects to the broad definition of copyright infringement and its potential for abuse, a notion that many other critics have voiced.

“Copyright infringement” is understood in its broadest sense here: an amateur movie which includes quotations from a copyright-protected script or soundtrack would qualify, so would a home movie filmed while Kung Fu Panda played on a TV screen in the background. Some more nice examples here. Any use of any ‘intellectual property’ object is regarded as a violation resulting in a blog – or even an entire web resource – being closed down.

If we accept this law, hundreds of thousands of lawyers will suddenly appear out of the woodwork because almost any website can be accused of copyright infringement! This law will lead to major legalized extortion.

Critics of SOPA, like Kaspersky, believe that the legislation will lead to broad censorship by aggressive copyright enforcers. And so Kaspersky Lab is withdrawing from the BSA in protest of the group’s support for SOPA.

Meanwhile, another vendor has joined a different consortium also in an effort to combat network censorship. This morning network security vendor Websense announced that it was joining the Global Network Initiative (GNI), an organization established “to protect and advance freedom of expression and privacy in the ICT [Information and Communication Technology] sector.”

Websense has been outspoken in recent months about its efforts to keep its network filtering technology out of the hands of governments that would use it to censor and spy on citizens. This issue has been in the news in recent months because of revelations that the Syrian government was using Blue Coat filtering technology in its efforts to shut down anti-government protests. In response to that news, Websense recently posted that it dealt with a similar issue in 2009 when it discovered that a Yemeni ISP had somehow acquired Websense technology and was using it to censor the Internet. Websense disabled its software remotely to prevent the censorship from continuing.

In its statement on this issue, Websense wrote:

American software companies should take strong measures to prevent the misuse of their technologies where it would be harmful to the public good. And it’s long overdue for American technology companies to step forward and address this problem.

Now Websense has taken another step to raise awareness about this issue by joining the GNI, essentially agreeing that it will abide by the GNI’s principles to promote an open and free global Internet.


October 21, 2011  8:28 AM

RIM: Move on from BlackBerry

Shamus McGillicuddy

Sometimes it pays to move on, no matter how much you have invested in something.

This summer Freakonomics Radio ran an episode titled “The Upside of Quitting,” which poked holes in the old adage “winners never quit and quitters never win.” Many people, the program argued, are unable to recognize that they have committed themselves to an endeavor that is failing. The more “sunk costs” someone has in such an endeavor, the less likely he or she is to give up on it.  No matter how hard it might be to admit it, sometimes it pays to just walk away and try something new.

And here we have Research In Motion (RIM), inventor of the once mighty BlackBerry, so popular a device that users dubbed it the “CrackBerry.” The BlackBerry was THE enterprise mobility device of the pre-iPhone era. A reliable platform for mobile email, contacts and calendars that offered mobility managers centralized control and rock-solid security, the BlackBerry made RIM a tech superpower.

That era of dominance is over. The ever-steepening decline of the BlackBerry, along with recent disasters like RIM’s global service outage, have a lot of people writing RIM obituaries. It’s prompted me to ask myself: Is it time for RIM to walk away from the BlackBerry?

RIM was almost too successful with the BlackBerry brand. The device is a household name, while no one aside from IT managers and the tech media knows who RIM is. Mainstream marketing of any RIM device is pegged to the BlackBerry brand, not RIM. RIM is a BlackBerry company. What else can it be?

We may find out the answer to that question soon. Android and Apple iOS devices have destroyed the BlackBerry’s share of the consumer mobile device market, and now they’re eating into RIM’s sweet spot: enterprise mobility. Enterprise Management Associates (EMA) just announced that more than 30% of large enterprises (10,000+ employees) that are current BlackBerry users plan to migrate to a different platform within the next year. In its press release, EMA said:

“This represents a significant reduction from the platform’s current domination of the large enterprise market space with 52% of mobile device users in that demographic actively using a BlackBerry device as part of their job function.”

RIM’s mobility architecture remains sound (despite the recent outage) but the company has struggled to keep pace with innovation in the device market. When Apple upended the smartphone industry with the iPhone in 2007, RIM responded with the BlackBerry Storm, an ill-fated try at a touchscreen smartphone that failed to catch on.

Then Apple’s iPad blew up the touchscreen tablet market and RIM responded with the PlayBook, which enjoyed strong early sales but got panned by gadget reviewers who said the software wasn’t fully baked. They also questioned RIM’s requirement that PlayBook users tether the tablet to a BlackBerry via Bluetooth in order to access native email and calendar applications. A nice security feature for enterprise IT, but ultimately limiting to users who were already impressed by the elegance of the iPad and some of the better Android tablets. Amid news that retailers were slashing PlayBook prices last month, gadget bloggers jumped on speculation by an investment analyst who suggested RIM had given up on the device, a rumor that RIM vehemently denied.

Then came this month’s service outage, which turned 70 million BlackBerrys into bricks for several days. This has been a PR and customer service disaster, prompting publications to come up with cute headlines like “RIM’s Outage: Nail in Coffin?” and “Is Research In Motion the walking dead?”

It’s clear that the BlackBerry is in serious decline. Does it pay for RIM to stick it out and keep investing in it? This week at the BlackBerry DevCon America conference, RIM unveiled BBX, its next-generation device operating system. BBX is a combination of BlackBerry OS and QNX (the PlayBook tablet operating system). In a market where Windows Phone 7, Android and Apple iOS are all winning over users, does it make sense for RIM to evolve the BlackBerry OS like this? We saw Palm try to do this with WebOS. That didn’t go so well. Nokia walked away from Symbian and embraced Windows Phone 7. Should RIM walk away from BlackBerry?

How would you do that? Give up on the brand that defines your company? At this point, is it the BlackBerry user experience that RIM can hang its hat on? Or is it its middleware (BlackBerry Enterprise Server) and its network operations centers (NOCs)? Is RIM’s strength in its devices or its architecture?

Last May RIM announced that it was extending BlackBerry Enterprise Server support to Android and iOS devices. Perhaps that’s where RIM’s future lies. Incorporate non-BlackBerry devices into the architecture that won the hearts and minds of IT managers everywhere. Build value there. Sink R&D into that, not the next-generation BlackBerry. It’s not clear that going in that direction will be enough. The market for a mobility architecture might not be as large as one for a hot, new smartphone, but at least it’s a new direction that might work. It’s just a question of whether RIM wants to let go of a device that it has so much invested in. And RIM needn’t give up on devices, either. Instead, it could develop Android or Windows devices that are completely tied into the RIM architecture. Can RIM do that? Does it want to?

Sometimes it pays to quit. It doesn’t have to mean defeat. It can mean that you’ve decided to fight another battle that you think you can win.


October 3, 2011  1:28 PM

Junosphere Lab: Juniper’s cloud-based lab for enterprise customers

Shamus McGillicuddy

Juniper Networks today introduced a new version of its cloud-based network engineering lab, Junosphere. The new version, Junosphere Lab, gives enterprises and service providers cloud-based access to images of Junos, the firmware that runs on the company’s routers and switches. The service allows network teams to model, design and test networks virtually before deploying a physical one. It will also allow engineers to learn and test the features of a Junos device virtually before buying or installing a physical device.

This is the second Junosphere service offered by Juniper. Last May the company introduced Junosphere Classroom, a service designed for university network engineering classes and Juniper certification training partners. At the time of the Junosphere Classroom release, some networking pros complained that the service wasn’t tailored to the needs of engineers and enterprise networking teams who need to learn the features of Junos and design and test Juniper networks without building a physical Juniper-based network lab.

Junosphere Lab, which features some front-end user interface changes tailored to enterprise and service provider customers, should address some of the complaints that enterprise engineers had about the original Junosphere offering. Juniper has also partnered with other vendors to provide virtual instances of network design and testing technologies in the Junosphere cloud. Those partners include testing hardware vendor Spirent, network planning and design software vendor WANDL (Wide Area Network Design Laboratory), IP/MPLS planning and traffic engineering software vendor Cariden Technologies, and NetFlow and route monitoring and analysis vendor Packet Design.

If an enterprise has an in-house network lab, the network team can connect that lab to Junosphere Lab to expand the scale of the company’s internal lab capabilities and allow them to emulate how their production network might function with Junos versus an incumbent vendor.

The service is priced for an enterprise or service provider’s budget: Juniper offers access to the cloud-based lab at $5 per Junos instance per day. A network engineer could design and test a virtual 20-node Juniper network for $100 a day, for instance.

For the motivated network engineer who is seeking an opportunity for independent self-study, the home network lab might remain the best option. Junosphere is not intended for self-study, according to Judy Beningson, Juniper’s vice president and general manager for Junosphere and related products and services. She said the original Junosphere Classroom remains the best option from Juniper.

