Many of the bloggers who have analyzed and reported on the news of Insieme, Cisco’s latest spin-in, have talked about how the company’s formation is a morale-killer for Cisco employees. The concern is justified. Cisco’s spin-in approach enriches a select number of employees recruited to join the likes of Insieme and other past spin-ins like Nuova Systems, leaving engineers who work on products like the Catalyst line or the Aironet products to wonder when their big payday will come. That can lead to a brain drain as engineers bolt for a company that gives them a better opportunity to shine.
But what about the technology strategy that is coalescing around Insieme and the other moves Cisco is making with software-defined networks? Shouldn’t the bigger concern be that Cisco might be making a strategic blunder?
Last week Cisco circulated an internal memo that confirmed for employees its $100 million investment in Insieme, the spin-in that will form part of Cisco’s “build, buy, partner” strategy for software-defined networking (SDN). The Cisco memo, published by Om Malik, claims that the networking industry hasn’t yet settled on a definition for SDN, let alone a value proposition:
Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.
As Brad Casemore points out, here is Cisco’s opening salvo. It’s going to resist, or at least play down the value of, one of the core attributes of software-defined networking: the decoupling of the control and data planes. There is a little bit of cognitive dissonance with this statement. This decoupling of the control and data planes is an essential foundation of SDN. It enables centralized, flow-based networking. It enables programmability. It enables organizations to deploy third-party applications on a network through an SDN controller. But Cisco claims to transcend this idea. This vague dismissal should be troubling to SDN proponents.
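For readers new to the idea, the decoupling itself can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual API: a dumb data-plane flow table forwards what it already knows, and punts table misses up to a controller that holds the network-wide view and pushes decisions back down.

```python
class FlowTable:
    """Data plane: a simple match-action table."""
    def __init__(self):
        self.entries = {}  # match key -> action

    def lookup(self, pkt):
        return self.entries.get((pkt["dst"],))

class Controller:
    """Control plane: holds network-wide state, decides forwarding."""
    def __init__(self, topology):
        self.topology = topology  # dst address -> egress port, known centrally

    def packet_in(self, switch, pkt):
        # Table miss: make a decision and install a flow entry on the switch.
        port = self.topology[pkt["dst"]]
        switch.table.entries[(pkt["dst"],)] = ("output", port)
        return ("output", port)

class Switch:
    def __init__(self, controller):
        self.table = FlowTable()
        self.controller = controller

    def forward(self, pkt):
        action = self.table.lookup(pkt)
        if action is None:  # miss: punt to the controller
            action = self.controller.packet_in(self, pkt)
        return action

ctl = Controller({"10.0.0.2": 3})
sw = Switch(ctl)
print(sw.forward({"dst": "10.0.0.2"}))  # first packet consults the controller
print(sw.table.entries)                 # later packets hit the installed entry
```

The point of the sketch is the division of labor: once the controller owns the decision, it can be programmed, and third-party applications can sit on top of it.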
The memo goes on to quote Cisco CTO Padmasree Warrior to support this notion:
“If you ask five customers what SDN means to them, you may get five different answers. Customer motivations and expectations are different based on their business problem or deployment scenario,” Warrior says.
It’s true that some people new to the subject initially perceived OpenFlow as an architecture, rather than just a protocol that enables SDN. Once they get educated on the subject, few networking pros express much confusion on the matter. However, is Cisco’s view really transcending the current SDN definition? The memo muddies the waters a bit by claiming that Cisco’s Nexus 1000v virtual switch is an example of SDN:
While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy. For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.
Sure, the Nexus 1000v introduces a version of SDN to the extreme edge of a virtualized data center, but it doesn’t come close to achieving the network agility and programmability promised by software-defined networks, whether enabled by OpenFlow or not. What about the rest of the data center LAN, filled with physical switches so constrained that both the IETF and the IEEE are re-engineering Ethernet just to eliminate a legacy protocol like spanning tree?
Proponents say that SDN has the potential to eliminate spanning tree by defining flow routes centrally in a server-based controller, thus eliminating the risk of loops. Why upgrade to Shortest Path Bridging (SPB) or Transparent Interconnection of Lots of Links (TRILL) when an SDN network can do the same? If you want to use TRILL or SPB in your data center network today, you need to upgrade to the newest generation of your vendor’s switches, and you won’t be able to reverse course midway through. These vendors won’t play together. You can’t mix Brocade’s iteration of TRILL with Cisco’s. You can’t mix Avaya’s iteration of SPB with Cisco’s or Brocade’s. You probably wouldn’t want to mix vendors in your data center, but you also want investment protection with these new data center fabrics, don’t you? Five years from now when you need to refresh the server access layer, you’re locked into whatever vendor you’ve chosen.
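To make the spanning-tree argument concrete, here is a hedged sketch of what a controller does instead. The topology and switch names are made up; the idea is that the controller computes one explicit, loop-free path per flow over its global view of the network graph, so redundant links can carry traffic rather than being blocked the way spanning tree blocks them.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS over the controller's topology view; the result is loop-free
    by construction, so no link needs to be disabled network-wide."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None  # dst unreachable

# A topology with a physical loop: s1-s2-s3-s1. Spanning tree would block
# one of these links; a central controller simply routes each flow.
adj = {"s1": ["s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"]}
print(shortest_path(adj, "s1", "s3"))  # uses the direct s1-s3 link
```

Real controllers do far more (failure handling, per-flow policy), but the loop-freedom falls out of central path computation, not out of disabling links.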
You can ditch spanning tree in an OpenFlow-based SDN network using any combination of switches that support OpenFlow. Heck, Nicira Networks claims its product can get you there without even using OpenFlow switches. Just leave your legacy network in place. You know who is using OpenFlow? Google. You know who is using Nicira? eBay. Fidelity. Rackspace. NTT. Concerns about scalability with SDN may be justified, but some heavyweight companies have put it into production.
But never mind that for now. The Cisco memo expounds on the virtues of open, programmable networks (something that Arista Networks has offered for a couple years now). Toward the end, the memo lifts the veil off of Cisco’s SDN approach.
“Our strategy is to continue to offer choices to our customers so that they are not forced to go down a single path,” Warrior says. “We have a multipronged approach that goes beyond current perceptions of SDN, leveraging business-based use cases as building blocks so that we achieve architectural consistency and bring to bear the richness of all our capabilities.”
Warrior adds that Cisco already builds a lot of intelligence into its network silicon and software. Making them open and programmable will further unlock the value, while enabling further application awareness.
I will give Cisco credit here: the industry needs more “business-based use cases” for SDN. Midsized enterprises and even many large enterprises do not need SDN today. The networking pros at these companies who ask me about SDN are interested in the technology, but mostly they just want to stay current. Today the emerging SDN market is focused on serving the needs of larger enterprises and web-scale companies; broader business cases for the technology are years away. Many SDN start-ups are focusing on cloud providers and web giants rather than enterprises.
However, the mention of network silicon above (translation: ASICs) worries me. Here we have Cisco saying that it will make its ASICs and its software (IOS, NX-OS) open and programmable. Just how open and programmable will Cisco’s technology be? Look at this job posting for a software engineer at Cisco. (It may not last long; it’s been scrubbed of certain details since I first reported its existence a couple of weeks ago.) This and another job posting (which disappeared from Cisco’s website a few days ago) made many references to a ConnectedApps team that is developing APIs for a software development kit (SDK) that will open up Cisco’s technology to third-party developers as part of an SDN initiative.
Just how open and programmable will an initiative based on APIs be? This doesn’t sound like an API for OpenFlow. It sounds like something else, given Cisco’s downplay of OpenFlow. APIs are a way to allow third-party developers to hook their software to another vendor’s proprietary software. There’s nothing particularly open about it. SDN is about more than hooking third-party software to the edge of Cisco’s black box, whether that black box is in the form of software or an ASIC. SDN is what it is: networks defined by software rather than hardware. How do you do that? By opening up the black box of networks and letting engineers build their networks in new ways. There is a control plane and there is a data plane. SDN decouples them and opens up the network to a whole new world of possibilities. It’s as simple as that.
In a few years, more IT organizations will want an open, software-defined network. Cisco needs to find a way to be relevant in such a world. APIs won’t cut it.
Keeping up with emerging start-ups in the software-defined networking (SDN) market is becoming a full-time job.
Most of the SDN buzz centers on Layer 2/3 networking. That’s what is dominating the agenda at this week’s Open Networking Summit. However, a smaller group of start-ups is focusing on SDN at Layer 4-7.
Today network engineers virtualize Layer 4-7 services by deploying software images of leading network appliance vendors on x86 server hardware. These software images, often labeled as virtual appliances, are available from several WAN optimization and application delivery controller vendors, for instance.
Enterprises achieve scale with this approach by adding more virtual appliance images. However, bottlenecks remain inevitable.
“The real problem here is the operating system itself,” said Steve Georgis, CEO of LineRate Systems, a new start-up that specializes in virtual Layer 4-7 services. “Linux was designed to be a general purpose OS, not a network OS. The network stack spends a lot of time managing network connections. Every time you add a network connection, the amount of time that stack spends on managing connections grows and it can spend less and less time managing the actual packets. As you scale up to the thousands of simultaneous connections, the operating system is left with very little time to do any real work. You run into pretty dramatic bottlenecks and throughput falls off quickly.”
Some enterprises will eliminate these bottlenecks by attaching a network acceleration module to a server to offload some of the processes that can overwhelm a server’s CPU, like TCP termination on an application delivery controller. Unfortunately, once you add these modules, you are pretty limited in how you deploy Layer 4-7 services. You can’t stand up a new application delivery controller just anywhere. You have to put it on a server with the module.
LineRate Systems emerged from stealth mode today with a new acronym: SDNS (software-defined network services). Its technology, the LineRate Operating System (LROS), is a re-engineered network stack for the Linux kernel that enables wire-speed throughput on a Linux server. Georgis claims that this can deliver 20 to 40 Gbps of network processing capability on a commodity x86 server with extremely high session scalability (hundreds of thousands of full-proxy Layer 7 connections per second and more than 2 million concurrent active flows).
LineRate has done some additional software engineering under the hood, including some work to eliminate blocking among cores within a multi-core CPU.
On top of this LROS, LineRate is offering LineRate Proxy, a product that operates as a full proxy for Layer 4-7 services on commodity server hardware. It includes several features: load balancing, content switching and filtering, SSL termination/origination, ACL and IP filtering, TCP optimization, DDoS blocking and an IPv4/IPv6 translation gateway.
Georgis said LineRate will develop more functionality in security, network monitoring, and Layer 7 switching in the future. The company is initially targeting cloud providers, but it expects to develop an enterprise market, particularly among companies that are building private clouds.
Application-aware firewall vendor Palo Alto Networks has filed for an IPO that could signal big competitive trouble for Cisco and Juniper Networks.
Though Palo Alto has not yet turned a profit (the company reported a loss last year of $12.5 million), it more than doubled revenue to $119.6 million in 2011 from $48.8 million in 2010.
Many believe that growth came from customers that couldn’t find comparable features in Cisco and Juniper products and jumped ship. Blogger Brad Reese points out that while Palo Alto’s revenues soared 141% in the six months ending January 31, Cisco saw a revenue increase in the same time period of only 7.7%.
If Palo Alto’s gains have been direct losses for Cisco and Juniper, things only stand to get worse if Palo Alto goes public. After all, many large enterprises are hesitant to invest millions in a company that isn’t public and financially stable.
“No one is going to spend $20 million on a product from a company that isn’t public,” said one engineer at a multinational consulting firm, who recently made an initial Palo Alto purchase. “When I went to do a first pass [on buying firewalls], it was a half million bucks. It’s a big commitment to change a firewall product. You’re signing on for a long-term relationship with subscription services in addition.”
Even more threatening to Cisco and Juniper is that this engineer – like others – has found Palo Alto technology superior to the competition.
“When I told the other vendor that I wanted IDS, antivirus and content inspection, they looked at me like I had three heads. When I said that to Palo Alto, they said, ‘of course you would do that, why wouldn’t you?’” he said. “If you look at performance statistics on a box from another vendor, they tell you what the performance is on a per-service basis, but they don’t tell you what happens when you turn all services on. That’s not the case with Palo Alto.”
That’s likely because Palo Alto has created next-generation, application-aware firewalls from the jump — never having to adapt legacy technology to do new tricks. The company was founded in 2005 by Nir Zuk, who had been CTO at NetScreen before it was acquired by Juniper. As some tell it, Zuk went to the Juniper board with the message that firewalls had to become application-aware. Juniper eventually followed that advice, but not soon enough for Zuk, who founded a company based on the idea that next-generation firewalls should offer application-level monitoring with transaction detail and constantly updated signatures. Since then, Gartner has dubbed next-generation firewalls mainstream, and Cisco recently announced the launch of an application-aware firewall. Juniper has also announced similar features. It remains to be seen whether the more established vendors can catch up.
I think it’s safe to say that Cisco’s answer to software-defined networking and OpenFlow is starting to take shape.
Om Malik reports that Insieme (or Insiemi, depending on whom you talk to), a mysterious spin-in subsidiary of Cisco Systems, is aggressively recruiting engineering talent from hot start-ups, including Arista Networks, Big Switch Networks and Nicira Networks. Om says that Cisco/Insieme apparently tried and failed to poach from Arista, but it succeeded in grabbing four executives from Nicira and one from Big Switch, two companies that are major names in the emerging software-defined networking (SDN) and network virtualization space.
As Brad Casemore’s excellent blog from a couple weeks ago points out, Cisco has a history of propping up spin-in start-ups, which use Cisco cash (and Cisco employees) to build new technology outside its traditional bureaucratic product development structure. Cisco usually maintains a majority stake in the spin-in, with an option to buy the rest. When the spin-in company has something fully-baked, Cisco buys the entire company and welcomes back its former employees.
The last Cisco spin-in we saw was Nuova Systems. In 2006, Cisco staked about $70 million to start the company, then two years later bought out the minority owners (including several Cisco veterans who are reportedly also involved in starting Insieme). Nuova essentially built the first Nexus 5000 switch. As a subsidiary of Cisco, Nuova collaborated closely with its parent company so that the Nexus 5000 would be compatible with the Nexus 7000 series of switches, which were introduced shortly before Cisco spun Nuova back into the company.
Cisco/Insieme is reportedly dangling millions of dollars to snatch talent from Nicira and Big Switch. By targeting these companies, Cisco is making it pretty obvious what Insieme is up to, even though it has been coy about its plans for SDN thus far. When I asked Cisco CEO John Chambers about software-defined networking last month, he said:
We absolutely view [software-defined networking] as a viable option for the future, either as a total architecture or segments of it. We probably spend a couple hours a week focused on our strategy in this market through a combination of internal development, partnering and acquisition. If we do our job right you’ll see us move on multiple fronts here. And at the right time, when it is a benefit to our customers, we will outline our strategy for them.
He added that any SDN solution from Cisco would be heavily tied to Cisco’s strategy to differentiate with its application-specific integrated circuits (ASICs), the custom network silicon it builds into most of its switches.
So here we have a company founded by Cisco’s hardware-centric veterans snatching up software ninjas from SDN vendors. What could they be up to? Do you have any ideas? Let us know in the comments below.
Last summer, CompTIA promised to deliver 5,000 vouchers (valued at $400,000) to the Wounded Warrior Project (WWP), which in turn is offering career transition training courses to wounded U.S. service members at eight health care facilities and military bases across the country.
In 2011, 550 veterans sat for CompTIA exams through WWP with a 93% pass rate. They achieved certifications in CompTIA A+, Network+ and Security+. The partnership between WWP and CompTIA will continue through 2014.
Many of these newly certified veterans will go on to new careers in the private sector, while others will take on new IT roles within the military.
The BlackDiamond X8, which Extreme Networks first announced last spring, is now shipping. If you are struggling with density and oversubscription problems in your data center core, this chassis might solve them. The BDX8 packs 768 10 Gigabit Ethernet (GbE) ports or 192 40 GbE ports into a third of a rack, at wire speed.
In other words, you can pack 2,304 ports of 10 GbE into a single rack containing three BDX8 chassis. Other vendors might offer somewhat comparable density but only in an oversubscribed configuration. To get this amount of wire-speed 10 GbE ports with Cisco Nexus 7010 switches, you would need to fill six racks with 12 chassis. That’s a lot of capital expense, a lot of switches to manage and a lot of real estate in a data center. Also, note that the Nexus 7010 is a half-rack chassis. The 18-slot Nexus 7018 has higher density (768 wirespeed 10 GbE ports) but you’re only going to squeeze one of those into a single rack.
Extreme disclosed that Microsoft has been beta testing the chassis in the data center for its Executive Briefing Center in Redmond.
Prior to this release, Arista Networks’ 7508 chassis had the most impressive wire-speed port density (albeit only with 10 GbE). Arista’s 7508 packs 384 wire-speed 10 GbE ports into a chassis that is only 11 rack units with 8 I/O slots.
Then you have some of the newer data center fabrics, like Juniper’s QFabric, which you can’t really compare to the BDX8 from a pure speeds and feeds perspective, since Juniper positions an entire QFabric deployment as a single, logical switch chassis that has been exploded into scores of individual devices.
Not a lot of people need port density like this yet, let alone this kind of 40 GbE port density. And if they do need this kind of density, chances are they can live with oversubscription. While Extreme has a flashy flagship switch to show off, a lot of enterprises will be looking at Extreme’s overall network architecture, rather than just the impressive density. Cloud providers, high performance computing environments and financial services companies will give it a look. It leaves me wondering what will come next from competitors like Cisco, Dell-Force10, etc.
I recently visited the University of New Hampshire InterOperability Laboratory (UNH-IOL) for a behind-the-scenes tour. This 32,000-square-foot facility is the place where networking vendors large and small go to certify that their technologies interoperate with each other. The lab has more than 20 different testing programs, which produce independent results via collaboration with networking and storage vendors. You name it, they test it: Ethernet, IPv6, data center bridging, Fibre Channel, SATA, Wi-Fi.
UNH-IOL is a neutral, third-party testing lab, staffed by engineers and UNH students. Each testing program the lab has in place corresponds to a consortium of vendors who support the lab’s activities in exchange for certification that their products interoperate with standards and with other vendors’ products.
It’s not just hardcore, behind-closed-doors testing at UNH-IOL. The lab also hosts multiple events throughout the year, such as “Plugfests”: group-testing events where multiple vendors get together in a room full of tables and cables and test their equipment against each other for interoperability according to a specific test plan. These plugfests consist of a week of 12-hour days of testing. In a world where vendors like to cut each other down in the press, it’s nice to think of them all sitting in a room together for hours at a time, sweating over who interoperates best with the competition.
All boxes big and small
It’s not just the big enterprise and service provider gear that gets tested here. Home routers, hard drives and 3G dongles are tested, too.
In the IPv6 interoperability lab space, UNH-IOL staff test for compatibility and interoperability among anything that runs IPv6, including PCs, dongles, printers, embedded operating systems, routers and switches. During my visit, Cisco’s CRS-1 router, which typically sits in a service provider core, was being tested for IPv6 interoperability; further up the same rack, a tiny Linksys E4200 home router was also under test.
“When building products, IPv6 can be complicated,” said Tim Winters, manager of IOL’s IPv6 testing lab. “These [vendors] need to talk to each other.”
The IPv6 testing lab at UNH-IOL has played an important role for vendors in recent years, especially those who are suppliers to the federal government, which has been extremely aggressive with deadlines for running native IPv6 on federal agencies’ networks.
The racks of the living dead
Ethernet has been commercially available for more than 30 years now. In that time a lot of Ethernet vendors have come and gone, but a lot of their switches remain behind. Ethernet is still Ethernet. A vendor’s disappearance from the market doesn’t necessarily mean that it will disappear from every network. Somewhere out there, Compaq Fast Ethernet switches still lurk. And some network engineer, somewhere, is probably still proudly maintaining a few Bay Networks switches in a wiring closet.
So here’s the thing: if you’re a vendor building a state-of-the-art switch, you still need to make sure it will interoperate with everything out there. You never know what zombie vendors lurk in your customers’ closets. For this reason, UNH-IOL maintains what I like to call “racks of the living dead,” old Ethernet switches that its Ethernet testing lab uses in interoperability tests. One such rack is filled with several Bay Networks switches, a Nortel switch and a lone Fast Ethernet HP ProCurve switch. Some of the vendors in that rack I had never heard of.
Jeff Lapak, manager of testing for 10, 40 and 100 Gigabit Ethernet (GbE), said UNH-IOL tests for more than just Ethernet interoperability in switches. His team will check voltages on individual ports or the PHY (physical layer) coding on each switch.
What’s next? Gigabit Wi-Fi, OpenFlow testing?
UNH-IOL engineers are always keeping an eye on the latest advances in the networking industry, developing testing programs for new technologies as products start to hit the market.
One trendy technology that could find its way into the UNH-IOL labs soon is OpenFlow, the open protocol for software-defined networking that has enjoyed a lot of hype recently.
Winters said OpenFlow is still in its early stages and interoperability testing is not as important to vendors right now; most early solutions are somewhat proprietary. He predicted that when enterprises start buying second-generation OpenFlow products, they will start demanding interoperability from their vendors. That’s when testing will kick in.
Gigabit Wi-Fi? Same story. Mikkel Hagen, who manages testing for Wi-Fi among other technologies, said the next-generation Wi-Fi technologies just starting to come to market are too new for UNH-IOL to have a testing program in place. Just one vendor has even chatted with the lab’s staff about the technology, and no enterprise-grade products are expected until the second half of 2012 or early 2013.
HP Networking announced today that OpenFlow support is generally available on 16 different switches within its HP 3500, 5400 and 8200 series.
OpenFlow, an open protocol, enables software-defined networking by allowing a server-based controller to abstract and centralize the control plane of switches and routers. It has enjoyed plenty of buzz since Interop Las Vegas last year.
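For readers wondering what the controller actually sends to a switch, here is a hedged sketch of a flow entry: a set of match fields paired with actions. The field names are loosely modeled on the OpenFlow 1.0 match structure, and the port numbers and MAC address are made up; this is an illustration of the shape of the thing, not HP's or any vendor's actual API.

```python
import json

def make_flow_entry(in_port, dl_dst, out_port, priority=100):
    """Build a simplified flow entry: match on ingress port and
    destination MAC, and forward matching packets out a given port."""
    return {
        "match": {"in_port": in_port, "dl_dst": dl_dst},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": priority,      # higher-priority entries match first
        "idle_timeout": 30,        # entry expires if the flow goes quiet
    }

entry = make_flow_entry(1, "00:00:00:00:00:02", 2)
print(json.dumps(entry, indent=2))
```

Everything interesting in an OpenFlow network reduces to who computes these entries and when; the switch just executes them at wire speed.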
HP has had an experimental version of OpenFlow support on its switches for a couple years, but it was only available through a special research license. Saar Gillai, CTO of HP Networking, said his company is making OpenFlow generally available because HP customers are demanding it for use in production networks.
This position contrasts sharply with Cisco Systems’ apparent view of OpenFlow. At Cisco Live London this week, Cisco announced availability of new network virtualization technologies, including VXLAN support on its Nexus 1000v virtual switch and Easy Virtual Network (EVN), a WAN segmentation technology based on VRF-lite. But OpenFlow was not part of the discussion. In her keynote at Cisco Live, Cisco CTO Padmasree Warrior said her company wants to make software a core competency in 2012 and make networks more programmable, a key feature of software-defined networking. When asked where OpenFlow fits into this vision, Warrior said that software-defined networking is “broader than OpenFlow.” And Cisco VP of data center switching product management Ram Velaga said OpenFlow is “not production-ready.”
Gillai said HP is seeing a lot of interest in using OpenFlow in production networks. He said service providers are looking at using it to get more granular control of their networks. Enterprises want to use OpenFlow to make their data center networks more programmable. Particularly, enterprises that are using Hadoop to run large distributed applications with huge data sets are interested in using OpenFlow and software-defined networking for job distribution.
“We’ve spoken to customers who would like to set up a certain circuit with OpenFlow for the Hadoop shop to use at certain times of day,” Gillai said.
Of course, Warrior is right when she says software-defined networking is broader than OpenFlow. Arista Networks has been offering an open and non-proprietary approach to software-defined networking without OpenFlow for a couple of years.
But OpenFlow still has plenty of buzz, and big backers. HP’s support is significant, given its position as the number two networking vendor. And last week, IBM announced a joint solution with a new top-of-rack OpenFlow switch and NEC’s ProgrammableFlow controller.
IBM and HP. Those are two very big companies with a lot of customers.
When start-up Meraki first hit the scene a few years ago, it was known as the cloud-based wireless LAN vendor, yet another player in a very crowded market. Today it’s repositioning itself as a cloud-based networking vendor, with an expanded portfolio aimed at competing directly with Cisco Systems.
“The dominant competitor we’re going after across all our products is Cisco,” said Kiren Sekar, vice president of marketing at Meraki.
Originally a pure WLAN player
Meraki first offered a unique solution: A wireless LAN that required only access points, but no central controller appliance. Instead, the access points would go to a Meraki cloud for control and management. Meraki’s cloud interface offers administrators configuration management, automated firmware upgrades, and global visibility into the managed devices.
The vendor has done pretty well in a booming wireless LAN market, listing Burger King, Applebee’s, and the University of Virginia as customers. Meraki’s approach offers low-cost network operations, since its cloud-based management interface is aimed at serving general IT administrators rather than experienced network engineers.
Now routers and access switches
Last year Meraki introduced a small line of branch router-firewalls, its MX series. Like its wireless line, the Meraki MX routers are managed through the cloud. Again, the cloud approach offers global views of ports across multiple sites, configuration management, alerting and diagnostics, and automated firmware upgrades. The firewall functionality also includes application-layer inspection, a key feature of next-generation firewalls.
This month, Meraki expanded its portfolio even further, adding MX boxes capable of connecting enterprise campuses and data centers. The routers feature two-click, site-to-site VPN capabilities and WAN optimization features such as HTTP, FTP and TCP acceleration, caching, deduplication and compression.
Also, Meraki launched a new MS series of Layer 2/3 access switches, including a 24-port Gigabit Ethernet model and a 48-port 1/10 Gigabit Ethernet model, with or without Power over Ethernet (PoE). Again, these MS switches are managed through the Meraki cloud. The switches are obviously designed to compete head-to-head with the Catalyst 3750 series of switches from Cisco. These MS switches start at a list price of $1,199 for the 24-port, non-PoE switch. Combine that with ongoing licensing for the cloud-management support, and the total cost of ownership on the basic switch is about $1,400 over three years.
If a low cost of ownership value proposition on switching and routing (and WLAN) is important to you, Meraki can make a compelling case. However, the low-TCO sales pitch is starting to wear thin, according to many of the experts I talk to. Networks are getting more complex, not simpler. Low cost doesn’t ring bells in every IT department.
That’s why Meraki offers home-grown, advanced network services for no additional cost on its boxes. The MX router-firewalls come with WAN optimization features bundled in. Other vendors would require a license upgrade (or a separate appliance). They feature application-aware inspection and policy enforcement, something that usually requires a separate vendor. I can’t vouch for how these Meraki features compare to the WAN optimization capabilities of Riverbed Technology or the next-generation firewall capabilities of Palo Alto Networks and Check Point Software. But Meraki isn’t interested in competing with Riverbed, Palo Alto or Check Point. It’s going after Cisco.
“We view WAN acceleration as a way to differentiate ourselves from Cisco as opposed to a way to compete with Riverbed,” Meraki’s Sekar said. “For every company that has Riverbed, there are 10 who don’t, because they can’t absorb the cost or the complexity. But everyone needs a firewall.”
Is a low-cost, easily managed networking vendor something you’re looking for? Or do you still prefer to go for the higher-end products from your established vendors? Let us know.
In the emerging software-defined networking (SDN) market, where the OpenFlow protocol has generated a lot of hype, Big Switch is a prominent start-up. In an SDN network built with OpenFlow, the control plane of the switches and routers is abstracted into a centralized, server-based controller, which defines flows for data forwarding based on a centralized view of the network topology. Big Switch, which hasn’t offered many details about the products it has in beta today, is presumed to be working on a commercial OpenFlow controller.
Why offer an open source version of the product?
“We see [software-defined networking] as a three-tier thing,” said Kyle Forster, Big Switch co-founder and vice president of sales and marketing. “At the bottom you have the data plane, the Ethernet switches and routers. The middle-tier is the controller, which is where Floodlight fits. The third tier is a set of applications on top of that controller. We play commercially in that application tier.”
In other words, when Big Switch starts shipping products, it will offer an OpenFlow controller, based on the open source Floodlight, with bundles of applications for running a software-defined network. That’s where the money will be made.
The applications that Big Switch and third-party developers can build on top of an OpenFlow controller can range from rudimentary applications like multi-switch forwarding and topology discovery to more advanced services, such as load balancing and firewalls.
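To make that application tier concrete, here is a toy sketch of the decision logic a load-balancing app might run on top of a controller: for each brand-new flow, pick which backend server traffic should be steered to, then keep the flow pinned there. The names and addresses are made up, and the actual controller/switch plumbing is omitted.

```python
import itertools

class RoundRobinLB:
    """Toy controller application: round-robin flow-level load balancing."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)
        self.assignments = {}  # (src, vip) -> chosen backend

    def pick_backend(self, src, vip):
        # New flows get the next backend in rotation; existing flows stay
        # pinned so a single TCP connection isn't split across servers.
        key = (src, vip)
        if key not in self.assignments:
            self.assignments[key] = next(self._cycle)
        return self.assignments[key]

lb = RoundRobinLB(["10.0.1.1", "10.0.1.2"])
print(lb.pick_backend("10.0.9.5", "10.0.0.100"))  # first flow -> first backend
print(lb.pick_backend("10.0.9.6", "10.0.0.100"))  # next flow -> next backend
print(lb.pick_backend("10.0.9.5", "10.0.0.100"))  # same flow stays pinned
```

In a real deployment, each `pick_backend` decision would be translated into flow entries the controller installs on the switches along the path; the application itself never touches a packet.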
Big Switch’s goal with the open source release is to get the code out into the public domain.
“By open sourcing that, you get two things. You get high quality code because it’s visible to everybody,” Forster said. “You also get a vast amount of community members downloading the thing and playing around with it. So it gets hardened very rapidly. It’s also useful for our partners. If a partner is going to build an application on top of our commercial controller, they want peace of mind that if they no longer want a commercial relationship with Big Switch, they have the opportunity to go down the open source path.”