LAS VEGAS — As I was ambling toward an 8 a.m. meeting at Interop 2012 this week, I noticed a young woman dressed rather provocatively. She took the escalator just ahead of me and her appearance was striking. I thought to myself, “Wow, she is dressed inappropriately for this show!” A tiny black top and short, short, short black shorts that left just a minimum to the imagination.
At the top of the escalator she was greeted by an identically-dressed young woman. That’s when it clicked in my sleepy head. Booth babes!
Booth babes (or spokesmodels if you prefer a less pejorative term) are ubiquitous at trade shows that draw a heavy male audience. Car shows. Computer graphics conferences. IT shows. These are women hired for their looks. They are sex objects. There’s no getting around that fact. And the companies that hire them do a grave disservice to the industry.
I can appreciate an attractive woman like any other guy. However, as a journalist who covers the IT industry, I lose a lot of respect for companies who rely on them. I passed by booths belonging to Net Optics and Barracuda Networks several times on my way to other meetings and I did not fail to notice the platoon of scantily clad women hired to lure men close enough to scan their Interop badges. Anything for a sales lead, right? Are those sales leads really worth it when you’re promoting sexism?
I’m probably in the minority on this issue, but I am sick of the atmosphere created by these spokesmodels (and I’m not just saying that because my girlfriend might be reading this). I refuse to stop at any booth that features these women. These companies should find a better way to attract potential customers than fast cars and beautiful women. Leave cheap tricks like that to the auto industry. I don’t want to be associated with anything like that, so I’ll steer clear. There are plenty of companies that rely on industry professionals to engage the media and potential customers at the show.
Moreover, there are a lot of professional women on Interop’s showroom floor — marketing professionals, engineers, IT managers, technology executives. Men outnumber women ten-to-one in this industry, and we all know there are many reasons for this. Reason number one is staring you right in the face when you let some teenage girl with a winning smile and fishnet stockings scan your badge. The smiles may be nice, but companies that hire these models are creating a hostile work environment for our female colleagues. That’s not just some politically correct term. It’s a fact.
Enterprises tend to be conservative with their data center networks, which makes Juniper Networks’ efforts to displace legacy architectures with its new data center network fabric, QFabric, a challenge. Juniper needs public reference customers who have deployed a full QFabric solution in order to attract risk-averse prospective customers.
In yesterday’s first quarter 2012 earnings call, Juniper revealed the first customers who have deployed a full QFabric system. According to a transcript of the earnings call via Seeking Alpha, Stephan Dyckerhoff, Juniper’s EVP, Platform Systems, said:
We now have over 150 customers for the QFX product line, and are seeing them embrace the solution in a variety of different configurations, ranging from top-of-rack installations to full fabric deployments.
We are pleased to have the first full fabric deployments running in live production. Those deployments include Qihoo 360 in China and Australia-based Oracle [Orica]. In Q1, we also had a QFabric win in Europe at Jan Yperman hospital in Belgium. Customer feedback overall is good, and we are encouraged with the pipeline we are building.
Juniper has caught some flak in the industry for the slow roll-out of its full data center fabric and a general lack of reference customers who have deployed a full fabric. Most initial customers of QFabric have been deploying the QFX3500 as a traditional top-of-rack switch. This device operates as a “node” in a full QFabric solution. It’s reasonable to assume that all of those QFX3500 customers are at least considering a full-fabric deployment, but it’s not guaranteed.
Juniper did put me on the phone with a healthcare-focused cloud provider (Codonis) a few months ago to discuss its plans for a full-fabric QFabric installation, but that implementation was mostly in the planning stage. Juniper has also announced that Deutsche Börse (operator of the Frankfurt Stock Exchange), Thomson Reuters, Bell Canada and Terra (a Brazilian online media company) are all designing full-fabric deployments of QFabric, but none of those companies has announced putting the system into production yet.
The lack of North American customers with production deployments is troubling. Juniper needs reference customers that U.S. companies can talk to. Qihoo 360 is a Chinese web security software developer. Will a stateside network architect be impressed by that reference? I doubt it. Jan Yperman Hospital is a former Nortel Networks reference customer, so I’m assuming Juniper displaced a legacy Nortel network in its data center. That could be a promising reference when it eventually gets QFabric into production. Orica, an Australian chemical company, is a nice win. (A transcription error by Seeking Alpha suggested the customer was Oracle.) Will Orica adopt the technology elsewhere? Network architects will want to know.
A Goldman Sachs analyst on the earnings call asked Juniper to specify how many of its 150 QFabric customers have deployed a full fabric. Juniper’s Dyckerhoff declined to get specific:
…[W]e have a mix of deployments for the customers who have adopted the QFX product line. They range from top-of-rack to full fabric. The reason they adopt the product line is because we have strategic alignment with them on the architecture that they want to deploy going forward. And so the focus for us is to give them a great experience as they adopt the key pieces of technology and there’s a good number of them that actually adopt the full fabric…
Juniper’s slow roll-out of QFabric has been unfortunate, especially since much of the early hype surrounding the technology has been usurped by the rise of software-defined networking. The two technologies aren’t necessarily interchangeable, but web-scale companies and cloud providers (a sweet spot for QFabric) are looking hard at software-defined networking, which has got to be a challenge for Juniper.
Many of the bloggers who have analyzed and reported on the news of Insieme, Cisco’s latest spin-in, have talked about how the company’s formation is a morale-killer for Cisco employees. The concern is justified. Cisco’s spin-in approach enriches a select number of employees recruited to join the likes of Insieme and other past spin-ins like Nuova Systems, leaving engineers who work on products like the Catalyst line or the Aironet products to wonder when their big payday will come. That can lead to a brain drain as engineers bolt for a company that gives them a better opportunity to shine.
But what about the technology strategy that is coalescing around Insieme and the other moves Cisco is making with software-defined networks? Shouldn’t the bigger concern be that Cisco might be making a strategic blunder?
Last week Cisco circulated an internal memo that confirmed for employees its $100 million investment in Insieme, the spin-in that will form part of Cisco’s “build, buy, partner” strategy for software-defined networking (SDN). The Cisco memo, published by Om Malik, claims that the networking industry hasn’t yet settled on a definition for SDN, let alone a value proposition:
Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.
As Brad Casemore points out, here is Cisco’s opening salvo. It’s going to resist, or at least play down the value of, one of the core attributes of software-defined networking: the decoupling of the control and data planes. There is a little bit of cognitive dissonance with this statement. This decoupling of the control and data planes is an essential foundation of SDN. It enables centralized, flow-based networking. It enables programmability. It enables organizations to deploy third-party applications on a network through an SDN controller. But Cisco claims to transcend this idea. This vague dismissal should be troubling to SDN proponents.
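To make the decoupling concrete, here is a deliberately toy Python sketch (not any real controller or the OpenFlow wire protocol — all names are illustrative) of the division of labor SDN proponents describe: a central control plane decides policy and pushes match/action rules down, while each switch’s data plane does nothing but match packets against its installed flow table.

```python
# Illustrative sketch of control/data plane separation. Not a real SDN
# controller; class and field names are invented for this example.

class Switch:
    """Data plane only: match packets against installed flow rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match_fn, out_port)

    def install_rule(self, match_fn, out_port):
        self.flow_table.append((match_fn, out_port))

    def forward(self, packet):
        for match_fn, out_port in self.flow_table:
            if match_fn(packet):
                return out_port
        return None  # table miss: a real switch would punt to the controller


class Controller:
    """Control plane: one central brain programs every switch."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_flow(self, switch_name, match_fn, out_port):
        self.switches[switch_name].install_rule(match_fn, out_port)


ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
# Central policy decision: web traffic to 10.0.0.5 leaves s1 on port 2.
ctrl.install_flow("s1", lambda p: p["dst"] == "10.0.0.5" and p["dport"] == 80, 2)

print(s1.forward({"dst": "10.0.0.5", "dport": 80}))  # 2
print(s1.forward({"dst": "10.0.0.9", "dport": 22}))  # None (table miss)
```

The point of the toy: the switch never makes a policy decision. That is the property Cisco’s memo glosses over when it claims to “transcend” the control/data plane split.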
The memo goes on to quote Cisco CTO Padmasree Warrior to support this notion:
“If you ask five customers what SDN means to them, you may get five different answers. Customer motivations and expectations are different based on their business problem or deployment scenario,” Warrior says.
It’s true that some people new to the subject initially perceived OpenFlow as an architecture, rather than just a protocol that enables SDN. Once they get educated on the subject, few networking pros express much confusion on the matter. However, is Cisco’s view really transcending the current SDN definition? The memo muddies the waters by claiming that Cisco’s Nexus 1000V virtual switch is an example of SDN:
While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy. For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.
Sure, the Nexus 1000V introduces a version of SDN at the extreme edge of a virtualized data center, but it doesn’t come close to achieving the network agility and programmability promised by software-defined networks, whether or not they’re enabled by OpenFlow. What about the rest of the data center LAN, filled with physical switches so constrained that both the IETF and the IEEE are re-engineering Ethernet in order to eliminate a legacy protocol like spanning tree?
Proponents say that SDN has the potential to eliminate spanning tree by defining flow routes centrally in a server-based controller, thus eliminating the risk of loops. Why upgrade to Shortest Path Bridging (SPB) or Transparent Interconnection of Lots of Links (TRILL) when an SDN network can do it? If you want to use TRILL or SPB in your data center network today, you need to upgrade to the newest generation of your vendor’s switches, and you won’t be able to reverse course midway through. These vendors won’t play together. You can’t mix Brocade’s iteration of TRILL with Cisco’s. You can’t mix Avaya’s iteration of SPB with Cisco’s or Brocade’s. You probably wouldn’t want to mix vendors in your data center anyway, but you also want investment protection with these new data center fabrics, don’t you? Five years from now, when you need to refresh the server access layer, you’ll be locked into whatever vendor you’ve chosen.
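The “no spanning tree needed” argument boils down to graph theory: a controller with a global view can compute an explicit loop-free path per flow, so it never needs to block redundant links the way spanning tree does. A minimal sketch, using an invented four-switch topology (the names and links are assumptions for illustration, not any real deployment):

```python
from collections import deque

# Toy topology with a physical loop (s1-s2-s3-s1). Spanning tree would
# permanently block a link to break the loop; a central controller instead
# computes an explicit loop-free path per flow and installs only those hops,
# leaving every link available for other flows.
links = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s3", "s4"],
    "s3": ["s1", "s2", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(src, dst):
    """BFS shortest path -- inherently loop-free, no blocked links needed."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("s1", "s4"))  # ['s1', 's2', 's4']
```

Because breadth-first search visits each switch once, the resulting path can never contain a cycle — which is exactly the property spanning tree buys at the cost of disabling links.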
You can ditch spanning tree in an OpenFlow-based SDN network using any combination of switches that support OpenFlow. Heck, Nicira Networks claims its product can get you there without even using OpenFlow switches. Just leave your legacy network in place. You know who is using OpenFlow? Google. You know who is using Nicira? eBay. Fidelity. Rackspace. NTT. Concerns about scalability with SDN may be justified, but some heavyweight companies have put it into production.
But never mind that for now. The Cisco memo expounds on the virtues of open, programmable networks (something that Arista Networks has offered for a couple years now). Toward the end, the memo lifts the veil off of Cisco’s SDN approach.
“Our strategy is to continue to offer choices to our customers so that they are not forced to go down a single path,” Warrior says. “We have a multipronged approach that goes beyond current perceptions of SDN, leveraging business-based use cases as building blocks so that we achieve architectural consistency and bring to bear the richness of all our capabilities.”
Warrior adds that Cisco already builds a lot of intelligence into its network silicon and software. Making them open and programmable will further unlock the value, while enabling further application awareness.
I will give Cisco credit here: the industry needs more “business-based use cases” for SDN. Midsized enterprises and even many large enterprises do not need SDN today. The networking pros at these smaller companies who ask me about SDN are interested in the technology, but mostly they just want to stay current; they don’t need it. Today the emerging SDN market is focused on serving the needs of larger enterprises and web-scale companies, and many SDN start-ups are targeting cloud providers and web giants rather than enterprises. Broader business cases for the technology are years away.
However, the mention of network silicon above (translation: ASICs) worries me. Here we have Cisco saying that it will make its ASICs and its software (IOS, NX-OS) open and programmable. Just how open and programmable will Cisco’s technology be? Look at this job posting for a software engineer at Cisco (it may not last long; it’s been scrubbed of certain details since I first reported its existence a couple of weeks ago). This and another job posting (which disappeared from Cisco’s website a few days ago) made many references to a ConnectedApps team that is developing APIs for a software development kit (SDK) that will open up Cisco’s technology to third-party developers as part of an SDN initiative.
Just how open and programmable will an initiative based on APIs be? This doesn’t sound like an API for OpenFlow. It sounds like something else, given Cisco’s downplaying of OpenFlow. APIs are a way to let third-party developers hook their software into another vendor’s proprietary software. There’s nothing particularly open about that. SDN is about more than hooking third-party software to the edge of Cisco’s black box, whether that black box is in the form of software or an ASIC. SDN is what it is: networks defined by software rather than hardware. How do you do that? By opening up the black box of networks and letting engineers build their networks in new ways. There is a control plane and there is a data plane. SDN decouples them and opens up the network to a whole new world of possibilities. It’s as simple as that.
In a few years, more IT organizations will want an open, software-defined network. Cisco needs to find a way to be relevant in such a world. APIs won’t cut it.
Keeping up with emerging start-ups in the software-defined networking (SDN) market is becoming a full-time job.
Most of the SDN buzz centers on Layer 2/3 networking. That’s what is dominating the agenda at this week’s Open Networking Summit. However, a smaller group of start-ups are focusing on SDN at Layer 4-7.
Today network engineers virtualize Layer 4-7 services by deploying software images of leading network appliances on x86 server hardware. These software images, often labeled as virtual appliances, are available from several WAN optimization and application delivery controller vendors, for instance.
Enterprises achieve scale with this approach by adding more virtual appliance images. However, bottlenecks are inevitable.
“The real problem here is the operating system itself,” said Steve Georgis, CEO of LineRate Systems, a new start-up that specializes in virtual Layer 4-7 services. “Linux was designed to be a general purpose OS, not a network OS. The network stack spends a lot of time managing network connections. Every time you add a network connection, the amount of time that stack spends on managing connections grows and it can spend less and less time managing the actual packets. As you scale up to the thousands of simultaneous connections, the operating system is left with very little time to do any real work. You run into pretty dramatic bottlenecks and throughput falls off quickly.”
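Georgis’s claim is easy to model in back-of-the-envelope form: if the kernel spends a fixed slice of time per open connection on bookkeeping, the CPU share left for actually moving packets shrinks linearly as connection counts climb. The sketch below uses entirely invented overhead numbers — it illustrates the shape of the scaling argument, not any measured Linux behavior.

```python
# Back-of-the-envelope model of the connection-management bottleneck.
# The per-connection overhead figure is an assumption for illustration,
# not a measured value.

CPU_BUDGET_US = 1_000_000   # 1 second of CPU time, in microseconds
PER_CONN_OVERHEAD_US = 8    # assumed bookkeeping cost per connection

def useful_fraction(connections):
    """Fraction of the CPU budget left over for real packet work."""
    overhead = connections * PER_CONN_OVERHEAD_US
    return max(0.0, (CPU_BUDGET_US - overhead) / CPU_BUDGET_US)

for n in (1_000, 10_000, 50_000, 100_000):
    print(f"{n:>7} connections -> {useful_fraction(n):.0%} of CPU left for packets")
```

Under these assumed numbers, 1,000 connections leave 99% of the CPU for packet processing, while 100,000 connections leave only 20% — the “dramatic bottleneck” Georgis describes, and the overhead LineRate says its re-engineered stack avoids.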
Some enterprises will eliminate these bottlenecks by attaching a network acceleration module to a server to offload some of the processes that can overwhelm a server’s CPU, like TCP termination on an application delivery controller. Unfortunately, once you add these modules, you are pretty limited in how you deploy Layer 4-7 services. You can’t stand up a new application delivery controller just anywhere. You have to put it on a server with the module.
LineRate Systems emerged from stealth mode today with a new acronym: SDNS (Software-defined network services). Its technology, the LineRate Operating System (LROS), is a re-engineered network stack for a Linux kernel that enables wire-speed throughput on a Linux server. Georgis claims that this can deliver 20 to 40 Gbps of network processing capability on a commodity x86 server with extremely high session scalability (hundreds of thousands of full-proxy Layer 7 connections per second and more than 2 million concurrent active flows).
LineRate has done some additional software engineering under the hood, including some work to eliminate blocking among cores within a multi-core CPU.
On top of this LROS, LineRate is offering LineRate Proxy, a product that operates as a full proxy for Layer 4-7 services on commodity server hardware. It includes several features: load balancing, content switching and filtering, SSL termination/origination, ACL and IP filtering, TCP optimization, DDoS blocking and an IPv4/IPv6 translation gateway.
Georgis said LineRate will develop more functionality in security, network monitoring, and Layer 7 switching in the future. The company is initially targeting cloud providers, but it expects to develop an enterprise market, particularly among companies that are building private clouds.
Application-aware firewall vendor Palo Alto Networks has filed for an IPO that could signal big competitive trouble for Cisco and Juniper Networks.
Though Palo Alto has not yet turned a profit (the company reported a loss last year of $12.5 million), it more than doubled revenue to $119.6 million in 2011 from $48.8 million in 2010.
Many believe that growth came from customers that couldn’t find comparable features in Cisco and Juniper products and jumped ship. Blogger Brad Reese points out that while Palo Alto’s revenues soared +141% in the six months ending January 31, Cisco saw a revenue increase in the same time period of only +7.7%.
If Palo Alto’s gains have been direct losses for Cisco and Juniper, things only stand to get worse if Palo Alto goes public. After all, many large enterprises are hesitant to invest millions in a company that isn’t public and financially stable.
“No one is going to spend $20 million on a product from a company that isn’t public,” said one engineer at a multinational consulting firm, who recently made an initial Palo Alto test investment. “When I went to do a first pass [on buying firewalls], it was a half million bucks. It’s a big commitment to change a firewall product. You’re signing on for a long-term relationship with subscription services in addition.”
Even more threatening to Cisco and Juniper is that this engineer – like others – has found Palo Alto’s technology superior to the competition.
“When I told the other vendor that I wanted IDS, antivirus and content inspection, they looked at me like I had three heads. When I said that to Palo Alto, they said, ‘Of course you would do that, why wouldn’t you?’” he said. “If you look at performance statistics on a box from another vendor, they tell you what the performance is on a per-service basis, but they don’t tell you what happens when you turn all services on. That’s not the case with Palo Alto.”
That’s likely because Palo Alto has created next-generation, application-aware firewalls from the jump — never having to adapt legacy technology to do new tricks. The company was founded in 2005 by Nir Zuk, who had been CTO at NetScreen before it was acquired by Juniper. As some tell it, Zuk went to the Juniper board with the message that firewalls had to become application-aware. Juniper eventually followed that advice, but not soon enough for Zuk, who founded a company based on the idea that next generation firewalls should offer application-level monitoring with transaction detail and constantly updated signatures. Since then, Gartner has dubbed next-generation firewalls as mainstream and Cisco recently announced the launch of an application-aware firewall. Juniper has also announced similar features. It remains to be seen whether the more established vendors can catch up.
I think it’s safe to say that Cisco’s answer to software-defined networking and OpenFlow is starting to take shape.
Om Malik reports that Insieme (or Insiemi, depending on whom you talk to), a mysterious spin-in subsidiary of Cisco Systems, is aggressively recruiting engineering talent from hot start-ups, including Arista Networks, Big Switch Networks and Nicira Networks. Om says that Cisco/Insieme apparently tried and failed to poach from Arista, but it succeeded in grabbing four executives from Nicira and one from Big Switch, two companies that are major names in the emerging software-defined networking (SDN) and network virtualization space.
As Brad Casemore’s excellent blog from a couple weeks ago points out, Cisco has a history of propping up spin-in start-ups, which use Cisco cash (and Cisco employees) to build new technology outside its traditional bureaucratic product development structure. Cisco usually maintains a majority stake in the spin-in, with an option to buy the rest. When the spin-in company has something fully-baked, Cisco buys the entire company and welcomes back its former employees.
The last Cisco spin-in we saw was Nuova Systems. In 2006, Cisco staked about $70 million to start the company, then two years later bought out the minority owners (including several Cisco veterans who are reportedly also involved in starting Insieme). Nuova essentially built the first Nexus 5000 switch. As a subsidiary of Cisco, Nuova collaborated closely with its parent company so that the Nexus 5000 would be compatible with the Nexus 7000 series of switches, which were introduced shortly before Cisco spun Nuova back into the company.
Cisco/Insieme is reportedly dangling millions of dollars to snatch talent from Nicira and Big Switch. By targeting these companies, Cisco is making it pretty obvious what Insieme is up to, even though it has been coy about its plans for SDN thus far. When I asked Cisco CEO John Chambers about software-defined networking last month, he said:
We absolutely view [software-defined networking] as a viable option for the future, either as a total architecture or segments of it. We probably spend a couple hours a week focused on our strategy in this market through a combination of internal development, partnering and acquisition. If we do our job right you’ll see us move on multiple fronts here. And at the right time, when it is a benefit to our customers, we will outline our strategy for them.
He added that any SDN solution from Cisco would be heavily tied to Cisco’s strategy to differentiate with its application-specific integrated circuits (ASICs), the custom network silicon it builds into most of its switches.
So here we have a company founded by Cisco’s hardware-centric veterans snatching up software ninjas from SDN vendors. What could they be up to? Do you have any ideas? Let us know in the comments below.
Last summer, CompTIA promised to deliver 5,000 vouchers (valued at $400,000) to the Wounded Warrior Project (WWP), which in turn is offering career transition training courses to wounded U.S. service members at eight health care facilities and military bases across the country.
In 2011, 550 veterans sat for CompTIA exams through WWP with a 93% pass rate. They achieved certifications in CompTIA A+, Network+ and Security+. The partnership between WWP and CompTIA will continue through 2014.
Many of these newly certified veterans will go on to new careers in the private sector, while others will take on new IT roles within the military.
The BlackDiamond X8, which Extreme Networks first announced last spring, is now shipping. If you are struggling with density and oversubscription problems in your data center core, this chassis might solve them. The BDX8 packs 768 10 Gigabit Ethernet (GbE) ports or 192 40 GbE ports into a third of a rack, at wire speed.
In other words, you can pack 2,304 ports of 10 GbE into a single rack containing three BDX8 chassis. Other vendors might offer somewhat comparable density but only in an oversubscribed configuration. To get this amount of wire-speed 10 GbE ports with Cisco Nexus 7010 switches, you would need to fill six racks with 12 chassis. That’s a lot of capital expense, a lot of switches to manage and a lot of real estate in a data center. Also, note that the Nexus 7010 is a half-rack chassis. The 18-slot Nexus 7018 has higher density (768 wirespeed 10 GbE ports) but you’re only going to squeeze one of those into a single rack.
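The arithmetic behind that comparison is worth spelling out, using only the figures quoted above:

```python
# Quick check of the density math, using the figures from the article.
bdx8_ports = 768          # wire-speed 10 GbE ports per BDX8 chassis
bdx8_per_rack = 3         # the BDX8 is a third-of-a-rack chassis
rack_total = bdx8_ports * bdx8_per_rack
print(rack_total)         # 2304 wire-speed 10 GbE ports per rack

# Matching that with 12 Nexus 7010 chassis across six racks implies:
nexus_chassis = 12
print(rack_total / nexus_chassis)  # 192 wire-speed 10 GbE ports per 7010
```

In other words, the Extreme rack delivers roughly four times the wire-speed 10 GbE density per chassis of the half-rack 7010 in this comparison.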
Extreme disclosed that Microsoft has been beta testing the chassis in the data center for its Executive Briefing Center in Redmond.
Prior to this release, Arista Networks’ 7508 chassis had the most impressive wire-speed port density (albeit only with 10 GbE). Arista’s 7508 packs 384 wire-speed 10 GbE ports into a chassis that is only 11 rack units with 8 I/O slots.
Then you have some of the newer data center fabrics, like Juniper’s QFabric, which you can’t really compare to the BDX8 from a pure speeds and feeds perspective, since Juniper positions an entire QFabric deployment as a single, logical switch chassis that has been exploded into scores of individual devices.
Not a lot of people need port density like this yet, let alone this kind of 40 GbE port density. And if they do need this kind of density, chances are they can live with oversubscription. While Extreme has a flashy flagship switch to show off, a lot of enterprises will be looking at Extreme’s overall network architecture, rather than just the impressive density. Cloud providers, high performance computing environments and financial services companies will give it a look. It leaves me wondering what will come next from competitors like Cisco, Dell-Force10, etc.
I recently visited the University of New Hampshire InterOperability Laboratory (UNH-IOL) for a behind-the-scenes tour. This 32,000-square-foot facility is the place where networking vendors large and small go in order to certify that their technologies interoperate with each other. The lab has more than 20 different testing programs which produce independent results via collaboration with networking and storage vendors. You name it, they test it: Ethernet, IPv6, data center bridging, Fibre Channel, SATA, Wi-Fi.
UNH-IOL is a neutral, third-party testing lab, staffed by engineers and UNH students. Each testing program the lab has in place corresponds to a consortium of vendors who support the lab’s activities in exchange for certification that their products interoperate with standards and with other vendors’ products.
It’s not just hardcore, behind-closed-doors testing at UNH-IOL. The lab also hosts multiple events throughout the year, such as “Plugfests,” group-testing events where multiple vendors get together in a room full of tables and cables and test their equipment against each other for interoperability according to a specific test plan. These plugfests consist of a week of 12-hour days of testing. In a world where vendors like to cut each other down in the press, it’s nice to think of them all sitting in a room together for hours at a time, sweating over who interoperates best with the competition.
All boxes big and small
It’s not just the big enterprise and service provider gear that gets tested here. Home routers, hard drives and 3G dongles are tested, too.
In the IPv6 interoperability lab space, UNH-IOL staff are testing for compatibility and interoperability among anything that is running IPv6. That includes PCs, dongles, printers, embedded operating systems, routers and switches. To the left you can see Cisco’s CRS-1 router, which typically sits in a service provider core, being tested for IPv6 interoperability. And further up the rack you can see a tiny little Linksys E4200 home router that is also being tested at the lab.
“When building products, IPv6 can be complicated,” said Tim Winters, manager of IOL’s IPv6 testing lab. “These [vendors] need to talk to each other.”
The IPv6 testing lab at UNH-IOL has played an important role for vendors in recent years, especially those who are suppliers to the federal government, which has been extremely aggressive with deadlines for running native IPv6 on federal agencies’ networks.
Racks of the living dead
Ethernet has been commercially available for more than 30 years now. In that time a lot of Ethernet vendors have come and gone, but a lot of their switches remain behind. Ethernet is still Ethernet. A vendor’s disappearance from the market doesn’t necessarily mean that it will disappear from every network. Somewhere out there, Compaq Fast Ethernet switches still lurk. And some network engineer, somewhere, is probably still proudly maintaining a few Bay Networks switches in a wiring closet.
So here’s the thing: if you’re a vendor building a state-of-the-art switch, you still need to make sure it will interoperate with everything out there. You never know what zombie vendors lurk in your customers’ closets. For this reason, UNH-IOL maintains what I like to call “racks of the living dead,” old Ethernet switches that its Ethernet testing lab uses to test for interoperability with everything. Above to the right you can see just a portion of one of the racks of old switches the UNH-IOL maintains, filled with several Bay Networks switches and a Nortel switch. To the right you’ll see a single Fast Ethernet HP ProCurve switch. Some of the vendors in this rack I had never heard of.
Jeff Lapak, manager of testing for 10, 40 and 100 Gigabit Ethernet (GbE), said UNH-IOL tests for more than just Ethernet interoperability in switches. His team will check voltages on individual ports or the PHY (physical layer) coding on each switch.
What’s next? Gigabit Wi-Fi, OpenFlow testing?
UNH-IOL engineers are always keeping an eye on the latest advances in the networking industry, developing testing programs for new technologies as products start to hit the market.
One trendy technology that could find its way into the UNH-IOL labs soon is OpenFlow, the open protocol for software-defined networking that has enjoyed a lot of hype recently.
Winters said OpenFlow is still in its early stages and interoperability testing is not as important to vendors right now. Most early solutions are somewhat proprietary. He predicted that when enterprises start buying second-generation OpenFlow products, they will start demanding interoperability from their vendors. That’s when testing will kick in.
The same goes for gigabit Wi-Fi. Mikkel Hagen, who manages testing for Wi-Fi, among other technologies, said the next-generation Wi-Fi technologies just starting to come to market are too new for UNH-IOL to have a testing program in place. Just one vendor has even chatted with the lab’s staff about the technology, and no enterprise-grade products are expected until the second half of 2012 or early 2013.
HP Networking announced today that OpenFlow support is generally available on 16 different switches within its HP 3500, 5400 and 8200 series.
OpenFlow, an open protocol, enables software-defined networking by allowing a server-based controller to abstract and centralize the control plane of switches and routers. It has enjoyed plenty of buzz since Interop Las Vegas last year.
HP has had an experimental version of OpenFlow support on its switches for a couple years, but it was only available through a special research license. Saar Gillai, CTO of HP Networking, said his company is making OpenFlow generally available because HP customers are demanding it for use in production networks.
This position contrasts sharply with Cisco Systems’ apparent view of OpenFlow. At Cisco Live London this week, Cisco announced availability of new network virtualization technologies, including VXLAN support on its Nexus 1000V virtual switch and Easy Virtual Network (EVN), a WAN segmentation technology based on VRF-lite. But OpenFlow was not part of the discussion. In her keynote at Cisco Live, Cisco CTO Padmasree Warrior said her company wants to make software a core competency in 2012 and make networks more programmable, a key feature of software-defined networking. When asked where OpenFlow fits into this vision, Warrior said that software-defined networking is “broader than OpenFlow.” And Cisco VP of data center switching product management Ram Velaga said OpenFlow is “not production-ready.”
Gillai said HP is seeing a lot of interest in using OpenFlow in production networks. He said service providers are looking at using it to get more granular control of their networks. Enterprises want to use OpenFlow to make their data center networks more programmable. Particularly, enterprises that are using Hadoop to run large distributed applications with huge data sets are interested in using OpenFlow and software-defined networking for job distribution.
“We’ve spoken to customers who would like to set up a certain circuit with OpenFlow for the Hadoop shop to use at certain times of day,” Gillai said.
Of course, Warrior is right when she says software-defined networking is broader than OpenFlow. Arista Networks has been offering an open, non-proprietary approach to software-defined networking without OpenFlow for a couple of years.
But OpenFlow still has plenty of buzz, and big backers. HP’s support is significant, given its position as the number two networking vendor. And last week, IBM announced a joint solution with a new top-of-rack OpenFlow switch and NEC’s ProgrammableFlow controller.
IBM and HP. Those are two very big companies with a lot of customers.