In yesterday's first quarter 2012 earnings call, Juniper revealed the first customers who have deployed a full QFabric system. According to a transcript of the earnings call via Seeking Alpha, Stefan Dyckerhoff, Juniper's EVP, Platform Systems, said:
We now have over 150 customers for the QFX product line, and are seeing them embrace the solution in a variety of different configurations, ranging from top-of-rack installations to full fabric deployments.
We are pleased to have the first full fabric deployments running in live production. Those deployments include Qihoo 360 in China and Australia-based Oracle [Orica]. In Q1, we also had a QFabric win in Europe at Jan Yperman hospital in Belgium. Customer feedback overall is good, and we are encouraged with the pipeline we are building.
Juniper has caught some flak in the industry for the slow roll-out of its full data center fabric and a general lack of reference customers who have deployed a full fabric. Most initial QFabric customers have been deploying the QFX3500 as a traditional top-of-rack switch; that device operates as a "node" in a full QFabric system. It's reasonable to assume that all of those QFX3500 customers are at least considering a full-fabric deployment, but it's not guaranteed.
Juniper did put me on the phone with a healthcare-focused cloud provider (Codonis) a few months ago to discuss its plans for a full-fabric QFabric installation, but that implementation was mostly in the planning stage. Juniper has also announced that Deutsche Börse (operator of the Frankfurt Stock Exchange), Thomson Reuters, Bell Canada and Terra (a Brazilian online media company) are all designing full-fabric deployments of QFabric, but none of those companies has said whether the system is in production yet.
The lack of North American customers with production deployments is troubling. Juniper needs reference customers that U.S. companies can talk to. Qihoo 360 is a Chinese web security software developer. Will a stateside network architect be impressed by that reference? I doubt it. Jan Yperman Hospital is a former Nortel Networks reference customer, so I'm assuming Juniper displaced a legacy Nortel network in its data center. That could be a promising reference when the hospital eventually gets QFabric into production. Orica, an Australian chemical company, is the other production win, though a transcription error by Seeking Alpha initially rendered it as Oracle Australia.
A Goldman Sachs analyst on the earnings call asked Juniper to specify how many of its 150-plus QFX customers have deployed a full fabric. Juniper's Dyckerhoff declined to be specific:
…[W]e have a mix of deployments for the customers who have adopted the QFX product line. They range from top-of-rack to full fabric. The reason they adopt the product line is because we have strategic alignment with them on the architecture that they want to deploy going forward. And so the focus for us is to give them a great experience as they adopt the key pieces of technology and there’s a good number of them that actually adopt the full fabric…
Juniper's slow roll-out of QFabric has been unfortunate, especially since much of the early hype surrounding the technology has been usurped by the rise of software-defined networking. The two technologies aren't necessarily interchangeable, but web-scale companies and cloud providers (a sweet spot for QFabric) are looking hard at software-defined networking, which has got to be a challenge for Juniper.
But what about the technology strategy that is coalescing around Insieme and the other moves Cisco is making with software-defined networks? Shouldn’t the bigger concern be that Cisco might be making a strategic blunder?
Last week Cisco circulated an internal memo that confirmed for employees its $100 million investment in Insieme, the spin-in that will form part of Cisco's "build, buy, partner" strategy for software-defined networking (SDN). The Cisco memo, published by Om Malik, claims that the networking industry hasn't yet settled on a definition for SDN, let alone a value proposition:
Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.
As Brad Casemore points out, this is Cisco's opening salvo. It's going to resist, or at least play down the value of, one of the core attributes of software-defined networking: the decoupling of the control and data planes. There is a bit of cognitive dissonance in that statement. The decoupling of the control and data planes is an essential foundation of SDN. It enables centralized, flow-based networking. It enables programmability. It enables organizations to deploy third-party applications on a network through an SDN controller. But Cisco claims to transcend this idea. That vague dismissal should trouble SDN proponents.
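To make that decoupling concrete, here is a rough sketch of what flow-based programmability looks like from an application's point of view. The controller endpoint and the JSON schema below are invented for illustration (this is not OpenFlow on the wire or any vendor's actual API), but they capture the shape of the idea: software decides how traffic should be handled and tells the data plane what to do.

```python
# Illustrative sketch only: the controller URL and flow schema are hypothetical,
# not any product's API. An application expresses a forwarding decision as data
# and hands it to a controller, which programs the switches (the data plane).
import requests

CONTROLLER = "http://controller.example.com:8080/flows"  # hypothetical endpoint

flow_entry = {
    "switch": "edge-sw-01",           # which data-plane element gets the rule
    "match": {                        # traffic this rule applies to
        "eth_type": "ipv4",
        "ipv4_dst": "10.0.20.5",
        "tcp_dst": 443,
    },
    "actions": [                      # what the switch should do with it
        {"type": "set_queue", "queue_id": 2},
        {"type": "output", "port": 7},
    ],
    "priority": 100,
}

response = requests.post(CONTROLLER, json=flow_entry, timeout=5)
response.raise_for_status()
print("Flow installed:", response.status_code)
```

The particular fields don't matter; what matters is that a third-party application can drive forwarding behavior without logging into a switch, which is precisely the capability Cisco's statement glosses over.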
The memo goes on to quote Cisco CTO Padmasree Warrior to support this notion:
“If you ask five customers what SDN means to them, you may get five different answers. Customer motivations and expectations are different based on their business problem or deployment scenario,” Warrior says.
It's true that some people new to the subject initially perceived OpenFlow as an architecture, rather than just a protocol that enables SDN. Once they get educated on the subject, few networking pros express much confusion on the matter. But is Cisco's view really transcending the current SDN definition? The memo muddies the waters a bit by claiming that Cisco's Nexus 1000v virtual switch is an example of SDN:
While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy. For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.
Sure, the Nexus 1000v introduces a version of SDN to the extreme edge of a virtualized data center, but it doesn't come close to achieving the network agility and programmability promised by software-defined networks, whether or not they are enabled by OpenFlow. What about the rest of the data center LAN, filled with physical switches so constrained that both the IETF and the IEEE are re-engineering Ethernet in order to eliminate a legacy protocol like spanning tree?
Proponents say that SDN has the potential to eliminate spanning tree by defining flow routes centrally in a server-based controller, thus eliminating the risk of loops. Why upgrade to Shortest Path Bridging (SPB) or Transparent Interconnection of Lots of Links (TRILL) when an SDN network can do the same job? If you want to use TRILL or SPB in your data center network today, you need to upgrade to the newest generation of your vendor's switches, and you won't be able to reverse course midway through. These vendors won't play together. You can't mix Brocade's iteration of TRILL with Cisco's. You can't mix Avaya's iteration of SPB with Cisco's or Brocade's. You probably wouldn't want to mix vendors in your data center anyway, but you also want investment protection with these new data center fabrics, don't you? Five years from now, when you need to refresh the server access layer, you're locked into whatever vendor you've chosen.
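To show why a central controller makes spanning tree unnecessary, here is a toy sketch (my own, not how any shipping controller is implemented). With a global view of the topology, the controller computes a loop-free path to each destination and translates it into per-switch forwarding entries, so no link ever has to be blocked:

```python
# Toy illustration: a controller with a global topology view computes loop-free
# paths directly, so it never needs to block links the way spanning tree does.
from collections import deque

# Physical topology: switch -> {neighbor: local egress port toward that neighbor}
topology = {
    "sw1": {"sw2": 1, "sw3": 2},
    "sw2": {"sw1": 1, "sw3": 2, "sw4": 3},
    "sw3": {"sw1": 1, "sw2": 2, "sw4": 3},
    "sw4": {"sw2": 1, "sw3": 2},
}

def forwarding_entries(dest_switch):
    """Hop-count shortest paths toward dest_switch, computed centrally."""
    # BFS outward from the destination; the switch we discover each node from
    # is its next hop back toward the destination, so the tree is loop-free.
    next_hop = {dest_switch: None}
    queue = deque([dest_switch])
    while queue:
        sw = queue.popleft()
        for neighbor in topology[sw]:
            if neighbor not in next_hop:
                next_hop[neighbor] = sw
                queue.append(neighbor)
    # Translate each next hop into a concrete egress port per switch.
    return {sw: topology[sw][hop] for sw, hop in next_hop.items() if hop}

print(forwarding_entries("sw4"))
# e.g. {'sw2': 3, 'sw3': 3, 'sw1': 1} -- every link stays usable, no loops form
```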
You can ditch spanning tree in an OpenFlow-based SDN network using any combination of switches that support OpenFlow. Heck, Nicira Networks claims its product can get you there without even using OpenFlow switches. Just leave your legacy network in place. You know who is using OpenFlow? Google. You know who is using Nicira? eBay. Fidelity. Rackspace. NTT. Concerns about scalability with SDN may be justified, but some heavyweight companies have put it into production.
But never mind that for now. The Cisco memo expounds on the virtues of open, programmable networks (something Arista Networks has offered for a couple of years now). Toward the end, the memo lifts the veil off Cisco's SDN approach.
“Our strategy is to continue to offer choices to our customers so that they are not forced to go down a single path,” Warrior says. “We have a multipronged approach that goes beyond current perceptions of SDN, leveraging business-based use cases as building blocks so that we achieve architectural consistency and bring to bear the richness of all our capabilities.”
Warrior adds that Cisco already builds a lot of intelligence into its network silicon and software, and that making them open and programmable will unlock more of that value while enabling greater application awareness.
I will give Cisco credit here: the industry needs more "business-based use cases" for SDN. Midsized enterprises and even many large enterprises do not need SDN today. The networking pros at these smaller companies who ask me about SDN are interested in the technology, but mostly they just want to stay current; they don't need it yet. Today the emerging SDN market is focused on serving the needs of larger enterprises and web-scale companies, and many SDN start-ups are targeting cloud providers and web giants rather than enterprises. Broader business cases for the technology are years away.
However, the mention of network silicon above (translation: ASICs) worries me. Here we have Cisco saying that it will make its ASICs and its software (IOS, NX-OS) open and programmable. Just how open and programmable will Cisco's technology be? Look at this job posting for a software engineer at Cisco (it may not last long; it's been scrubbed of certain details since I first reported its existence a couple of weeks ago). This and another job posting (which disappeared from Cisco's website a few days ago) made many references to a ConnectedApps team that is developing APIs for a software development kit (SDK) that will open up Cisco's technology to third-party developers as part of an SDN initiative.
Just how open and programmable will an initiative based on APIs be? This doesn't sound like an API for OpenFlow. It sounds like something else, given how Cisco downplays OpenFlow. APIs are a way to let third-party developers hook their software to another vendor's proprietary software. There's nothing particularly open about that. SDN is about more than hooking third-party software to the edge of Cisco's black box, whether that black box is in the form of software or an ASIC. SDN is what it is: networks defined by software rather than hardware. How do you do that? By opening up the black box of networks and letting engineers build their networks in new ways. There is a control plane and there is a data plane. SDN decouples them and opens up the network to a whole new world of possibilities. It's as simple as that.
In a few years, more IT organizations will want an open, software-defined network. Cisco needs to find a way to be relevant in such a world. APIs won't cut it.
Most of the SDN buzz centers on Layer 2/3 networking. That’s what is dominating the agenda at this week’s Open Networking Summit. However, a smaller group of start-ups are focusing on SDN at Layer 4-7.
Today network engineers virtualize Layer 4-7 services by deploying software images from leading network appliance vendors on x86 server hardware. These software images, often labeled virtual appliances, are available from several WAN optimization and application delivery controller vendors, for instance.
Enterprises achieve scale with this approach by adding more virtual appliance images. However, bottlenecks are inevitable.
“The real problem here is the operating system itself,” said Steve Georgis, CEO of LineRate Systems, a new start-up that specializes in virtual Layer 4-7 services. “Linux was designed to be a general purpose OS, not a network OS. The network stack spends a lot of time managing network connections. Every time you add a network connection, the amount of time that stack spends on managing connections grows and it can spend less and less time managing the actual packets. As you scale up to the thousands of simultaneous connections, the operating system is left with very little time to do any real work. You run into pretty dramatic bottlenecks and throughput falls off quickly.”
Some enterprises will eliminate these bottlenecks by attaching a network acceleration module to a server to offload some of the processes that can overwhelm a server's CPU, like TCP termination on an application delivery controller. Unfortunately, once you add these modules, you are pretty limited in how you deploy Layer 4-7 services. You can't stand up a new application delivery controller just anywhere; you have to put it on a server with the module.
LineRate Systems emerged from stealth mode today with a new acronym: SDNS (Software-defined network services). Its technology, the LineRate Operating System (LROS), is a re-engineered network stack for a Linux kernel that enables wire-speed throughput on a Linux server. Georgis claims that this can deliver 20 to 40 Gbps of network processing capability on a commodity x86 server with extremely high session scalability (hundreds of thousands of full-proxy Layer 7 connections per second and more than 2 million concurrent active flows).
LineRate has done some additional software engineering under the hood, including some work to eliminate blocking among cores within a multi-core CPU.
On top of LROS, LineRate is offering LineRate Proxy, a product that operates as a full proxy for Layer 4-7 services on commodity server hardware. It includes several features: load balancing, content switching and filtering, SSL termination/origination, ACL and IP filtering, TCP optimization, DDoS blocking and an IPv4/IPv6 translation gateway.
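For readers who haven't lived with these boxes, "full proxy" means the client's TCP connection terminates on the proxy and the proxy opens a separate connection to the back end; that position in the path is what makes load balancing, SSL termination and filtering possible. The sketch below is my own minimal illustration of the pattern (it has nothing to do with LROS internals, and the back-end addresses are made up), and it hints at why connection handling, rather than raw packet forwarding, becomes the scaling problem Georgis describes.

```python
# A toy full proxy: the client's TCP connection terminates on the proxy, and the
# proxy opens its own connection to a back-end server, so it can load-balance,
# filter or rewrite traffic in between. Back-end addresses are hypothetical.
import asyncio
import itertools

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]   # hypothetical servers
next_backend = itertools.cycle(BACKENDS)                 # simple round-robin

async def pipe(reader, writer):
    # Copy bytes one way until the sending side closes.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # Terminate the client connection here, then open a separate back-end
    # connection: this is the "full proxy" position in the path.
    host, port = next(next_backend)
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```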
Georgis said LineRate will develop more functionality in security, network monitoring, and Layer 7 switching in the future. The company is initially targeting cloud providers, but it expects to develop an enterprise market, particularly among companies that are building private clouds.
Though Palo Alto has not yet turned a profit (the company reported a loss last year of $12.5 million), it more than doubled revenue to $119.6 million in 2011 from $48.8 million in 2010.
Many believe that growth came from customers that couldn't find comparable features in Cisco and Juniper products and jumped ship. Blogger Brad Reese points out that while Palo Alto's revenue soared 141% in the six months ending January 31, Cisco saw a revenue increase of only 7.7% in the same period.
If Palo Alto’s gains have been direct losses for Cisco and Juniper, things only stand to get worse if Palo Alto goes public. After all, many large enterprises are hesitant to invest millions in a company that isn’t public and financially stable.
“No one is going to spend $20 million on a product from a company that isn’t public,” said one engineer at a multinational consulting firm, who recently made an initial Palo Alto test investment. “When I went to do a first pass [on buying firewalls], it was a half million bucks. It’s a big commitment to change a firewall product. You’re signing on for a long-term relationship with subscription services in addition.”
Even more threatening to Cisco and Juniper is that this engineer – like others – has found Palo Alto's technology superior to the competition.
"When I told the other vendor that I wanted IDS, antivirus and content inspection, they looked at me like I had three heads. When I said that to Palo Alto, they said, 'Of course you would do that, why wouldn't you?'" he said. "If you look at performance statistics on a box from another vendor, they tell you what the performance is on a per-service basis, but they don't tell you what happens when you turn all services on. That's not the case with Palo Alto."
That's likely because Palo Alto has created next-generation, application-aware firewalls from the jump, never having to adapt legacy technology to do new tricks. The company was founded in 2005 by Nir Zuk, who had been CTO at NetScreen before it was acquired by Juniper. As some tell it, Zuk went to the Juniper board with the message that firewalls had to become application-aware. Juniper eventually followed that advice, but not soon enough for Zuk, who founded a company based on the idea that next-generation firewalls should offer application-level monitoring with transaction detail and constantly updated signatures. Since then, Gartner has dubbed next-generation firewalls mainstream, Cisco recently announced an application-aware firewall, and Juniper has announced similar features. It remains to be seen whether the more established vendors can catch up.