When Cisco introduced its Open Network Environment at Cisco Live last week, Cisco executives spoke about the importance of northbound APIs, leveraging the intelligence of switches and routers to supply information to the orchestration layer so that it can make better decisions about how to program the network.
Now Plexxi Inc., a stealthy, software-defined networking start-up, is emerging with a similar message. Plexxi is being vague on the details of its technology but it’s lifted the curtain on some of its marketing message this week. It’s calling its approach to software-defined networking Affinity-Driven Networking.
Plexxi is also hinting that its software-defined networking technology extends to the physical layer of an optical network, allowing a network manager to provision bandwidth by manipulating the wavelengths of optics.
Mat Mathews, co-founder and vice president of product management at Plexxi, said his company is starting a limited private beta with select customers who have been working closely with Plexxi on the development of its products. Mathews describes Affinity-Driven Networking as an integrated software and hardware solution consisting of a top-of-rack network switch and a software-based controller. Although Mathews describes the product as a software-defined networking (SDN) solution, he said Plexxi is not using OpenFlow.
Plexxi customers will be able to build out an entire data center network using only the company’s top-of-rack switches and its software, although customers will have the option of integrating a network domain built with Plexxi technology into an existing legacy network.
Mathews said Plexxi is looking beyond the decoupling of the control and forwarding planes that other SDN vendors champion in order to focus on a neglected issue in the data center.
“Networks are fundamentally disconnected from applications and application workloads,” he said. “We can use tools and software to understand application workloads in a data center. The first part of Affinity Networking is getting an understanding of those workloads and what they require. The second part is building a physical network that can implement these workloads exactly with the direct requirements we understand from either the orchestration tools, the applications themselves or the data center operator.”
So just how does Plexxi deliver a network that is tailored to the needs of application workloads? The company isn’t sharing too much information on that, at least until its products become generally available at the end of this year or early next year. But Mathews did drop some hints.
“We’re leveraging some optical technologies to allow flexibility in how we interconnect racks together without the overhead traditionally associated with the aggregation and core layers [of a data center network]. We can connect racks directly together and enable direct east-west capacity. Servers want to talk to each other, so our physical topologies are optimized for east-west traffic. And we use software-defined networking to flexibly orchestrate that bandwidth based on where it’s actually needed.”
Plexxi’s secret appears to focus on the physical layer of the network stack. The company is virtualizing the physical layer with optical technology somehow.
“We set out to build a network where you can ‘move the wires,'” he said. “That’s where we brought in optical technology, because we can use things like wavelengths and lambdas that are provisioned in software and say, ‘OK, this rack needs to be connected to this rack. And it doesn’t just need a 20 gigabit connection. It needs a 100 gigabit connection.’ Those kinds of dynamic capacity issues are only possible where you get down to virtualizing the physical layer. Optical technology allows us to view those physical layers as wavelengths of light.”
“We can move capacity by changing the way wavelengths of light are distributed across the network,” he said. “Therefore, we can say, ‘You need 20 gigabits now. We’ll take the 10 gigabits you’re not using over here and put it there.’ That’s not possible today with just a [SDN] flow table.”
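Plexxi hasn’t published how its controller actually steers wavelengths, but the idea Mathews describes can be sketched in a few lines: capacity is a pool of lambdas, and provisioning a link means reassigning idle ones to it in software. Everything below — class names, the 10 Gbps-per-wavelength figure, the rack labels — is a hypothetical illustration, not Plexxi’s implementation.

```python
# Toy model of software-provisioned optical capacity. All names and
# numbers here are invented for illustration.

WAVELENGTH_GBPS = 10  # assume each lambda carries 10 Gbps

class OpticalFabric:
    def __init__(self, total_wavelengths):
        # wavelength id -> rack-pair link it currently serves (None = idle)
        self.assignment = {w: None for w in range(total_wavelengths)}

    def capacity(self, link):
        # Capacity of a link is just the sum of the lambdas pointed at it.
        return sum(WAVELENGTH_GBPS
                   for assigned in self.assignment.values() if assigned == link)

    def provision(self, link, gbps):
        """Steer idle wavelengths to a link until it has `gbps` of capacity."""
        needed = gbps - self.capacity(link)
        for w, assigned in self.assignment.items():
            if needed <= 0:
                break
            if assigned is None:
                self.assignment[w] = link
                needed -= WAVELENGTH_GBPS
        return self.capacity(link)

fabric = OpticalFabric(total_wavelengths=16)
fabric.provision(("rack1", "rack2"), 20)   # this rack pair needs 20 Gbps
fabric.provision(("rack1", "rack2"), 100)  # demand spikes; add more lambdas
print(fabric.capacity(("rack1", "rack2")))  # 100
```

The point of the sketch is the last two calls: moving from a 20 gigabit to a 100 gigabit connection is a software operation on the wavelength map, not a recabling job.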
SAN DIEGO — Next month Petco Park, the home of Major League Baseball’s San Diego Padres, will turn on a distributed antenna system and a wireless LAN to provide wireless access for all of its fans.
The Padres, a Cisco customer, opened the gates of its ballpark to the technology press during Cisco Live this week in San Diego. Like most sports stadiums, mobile access at Petco Park has been a challenge, according to Steve Reese, vice president of technology for the Padres. The building is made of tens of thousands of tons of concrete and steel. And the thousands of fans who come to games compete for limited bandwidth, overwhelming the macro cells of the mobile service providers in the area.
“We had great intentions [when we built this park], but something was missing,” Reese said. “What wasn’t factored in was the speed of technology as time moved on. The [mobile] expectations of fans that come into this facility today aren’t met.”
Reese is trying to address this. He has deployed a distributed antenna system (DAS) with 460 individual antennas. The “anchor carrier” for DAS is Verizon, he said, but it has capacity for three other carriers.
Backing up the DAS is a wireless LAN composed of 423 Cisco 3602e access points (APs), rugged APs designed for stadium deployments. The 3602e features a narrow 36-degree coverage pattern, as opposed to the typical 180-degree pattern. This enables the Padres to target specific sections of the ballpark without overlapping the coverage of neighboring APs.
The Padres have also installed a redundant pair of Nexus 7009 core switches to handle all this mobile traffic.
Reese said that 50% of Padres fans come to the park with mobile devices and about 12% of them will connect to his Wi-Fi network rather than the cellular networks. The Padres’ mobile infrastructure is robust, but Reese remains unsure of what will happen.
“A variety of people are coming in and we don’t know what they’re going to do,” he said. “One thing is iCloud. Thirty-five percent of the phones coming in are going to be iPhones and they will be trying to sync with iCloud. How are we going to deal with that?”
The Padres will get some answers when the team turns on the wireless network during baseball’s All-Star break in mid-July.
Cisco Systems has enhanced its online lab environment, Cisco Learning Labs, to make it more user-friendly. It has also extended the environment to allow networking pros to practice certain tasks that usually require access to physical hardware.
Learning@Cisco launched Cisco Learning Labs a year ago as an online environment geared toward helping networking pros practice lab exercises for Cisco certifications. The lab environment uses a version of IOS that runs on UNIX. It’s positioned as a cheap alternative to home labs or the physical labs that some certification boot camps rent out. Cisco Learning Labs sells 25-hour blocks of online lab time at a starting price of $25.
The new version of Cisco Learning Labs has been dressed up with an updated user interface to make it more intuitive for users, according to Marcello Hunter, product manager for Learning@Cisco. These updates have been added to the ICND1, ICND2, ROUTE, SWITCH and TSHOOT labs. These labs now feature helpful elements like “question bubbles” that new users can click on to get hints on what to do.
“A lot of our customers working on these labs are newcomers to the networking world and newcomers to the complexities of working with networks,” Hunter said. “Just learning how to set them up and launch them and learning how to work with the user interface of IOS was new to them.”
The lab environment also has a series of new instructional videos that accompany the ICND1 and ICND2 labs.
Cisco has also introduced an expanded set of lab exercises based on switch functions that are typically implemented in hardware and are thus not available via an IOS software image. These functions include Layer 2 port security and port channel interfaces. The Layer 2 port security feature will expand the lab exercises available for all Cisco certifications. The port channel interface will expand labs for certifications at the CCNP level and above.
In a flat global enterprise router market, Huawei managed to wring out some robust growth in the first quarter of 2012, according to Infonetics Research.
Enterprise router revenue dropped to $834 million last quarter, down 9% from the fourth quarter of 2011, Infonetics Research announced this week. Year over year, the market grew just 2% over the first quarter of 2011. Pretty grim from a global perspective. However, the North American slice was slightly better: down 10% quarter over quarter, but up 8% year over year.
Despite the flat market, Huawei reported huge growth in enterprise router sales, which is surprising since North America was the primary area of growth and Huawei is just starting its push into the region’s enterprise market. Its home market of China was extremely weak last quarter, down 20% from the fourth quarter of 2011. Yet Huawei posted 79% revenue growth year over year and 130% growth in units shipped, Infonetics reported.
Cisco Systems remains in the driver’s seat, with 74.6% of the market revenue share last quarter, a 12-month high.
Juniper Networks’ QFabric solution can be tricky for tech journalists to cover. When Juniper boasts of QFabric customers, those boasts are very nuanced. Not all QFabric customers are really QFabric customers.
The industry is waiting for Juniper to offer customer success stories with the full QFabric architecture, which includes: the QFabric Node, a top-of-rack device also known as the QFX3500; the QFabric Interconnect transport device; and the QFabric Director, which serves as the management and control plane. This is the system that Juniper is trying to sell to the market.
And yet, the QFX3500 is also a very good top-of-rack Ethernet switch that can be deployed in a traditional data center network. Juniper hopes that some of the customers who buy the QFX3500 and use it as an Ethernet switch will eventually buy into the overall QFabric architecture, but there’s no guarantee that they will.
Given that IT organizations are eager to talk to reference customers who have deployed a full QFabric architecture, each press release about a new QFabric customer draws scrutiny. But the tech media can find it tough to parse press releases. For instance, Juniper just announced that Hong Kong Exchanges and Clearing, operator of Hong Kong’s stock exchange, is building a new data center with QFabric top-of-rack technology. The press release states:
Juniper Networks QFabric technology will be deployed to provide 10 Gigabit per second Ethernet (GbE) top-of-rack connectivity…
Meanwhile, the core of the network will be a “consolidated core network with traffic isolated across each system.” This sounds like a traditional network with QFX3500 switches at the server access layer. This is not a QFabric architecture.
The press release does say that this is a “carrier-class switching infrastructure based on the Juniper Networks QFabric architecture.” That does confuse the issue a bit. Is this network based on the QFabric architecture or a traditional network architecture that just happens to feature the QFX3500s that can operate in either environment?
Nowhere in the press release does Juniper mention the QFabric Interconnect or Director. This is a customer win for Juniper but it is not a QFabric win. Light Reading Asia suggests otherwise with the headline “Juniper’s QFabric Lands in Hong Kong:”
It’s important to distinguish between QFabric architecture deployments and QFX3500 sales. I’m very much looking forward to talking to production QFabric customers when Juniper makes them available, but HKEx doesn’t appear to be one.
LAS VEGAS — As I was ambling toward an 8 a.m. meeting at Interop 2012 this week, I noticed a young woman dressed rather provocatively. She took the escalator just ahead of me and her appearance was striking. I thought to myself, “Wow, she is dressed inappropriately for this show!” A tiny black top and short, short, short black shorts that left just a minimum to the imagination.
At the top of the escalator she was greeted by an identically-dressed young woman. That’s when it clicked in my sleepy head. Booth babes!
Booth babes (or spokesmodels if you prefer a less pejorative term) are ubiquitous at trade shows that draw a heavy male audience. Car shows. Computer graphics conferences. IT shows. These are women hired for their looks. They are sex objects. There’s no getting around that fact. And the companies that hire them do a grave disservice to the industry.
I can appreciate an attractive woman like any other guy. However, as a journalist who covers the IT industry, I lose a lot of respect for companies who rely on them. I passed by booths belonging to Net Optics and Barracuda Networks several times on my way to other meetings and I did not fail to notice the platoon of scantily clad women hired to lure men close enough to scan their Interop badges. Anything for a sales lead, right? Are those sales leads really worth it when you’re promoting sexism?
I’m probably in the minority on this issue, but I am sick of the atmosphere created by these spokesmodels (and I’m not just saying that because my girlfriend might be reading this). I refuse to stop at any booth that features these women. These companies should find a better way to attract potential customers than fast cars and beautiful women. Leave cheap tricks like that to the auto industry. I don’t want to be associated with something like that, so I’ll steer clear. There are plenty of companies who rely on industry professionals to engage the media and potential customers at the show.
Moreover, there are a lot of professional women on Interop’s showroom floor — marketing professionals, engineers, IT managers, technology executives. Men outnumber women ten-to-one in this industry, and we all know there are many reasons for this. Reason number one is staring you right in the face when you let some teenage girl with a winning smile and fishnet stockings scan your badge. Companies that hire these models are creating a hostile work environment for our female colleagues. That’s not just some politically correct term. It’s a fact.
Enterprises tend to be conservative with their data center networks, which makes Juniper Networks’ efforts to displace legacy architectures with its new data center network fabric, QFabric, a challenge. Juniper needs public reference customers who have deployed a full QFabric solution in order to attract risk-averse prospective customers.
In yesterday’s first quarter 2012 earnings call, Juniper revealed the first customers who have deployed a full QFabric system. According to a transcript of the earnings call via Seeking Alpha, Stephan Dyckerhoff, Juniper’s EVP, Platform Systems, said:
We now have over 150 customers for the QFX product line, and are seeing them embrace the solution in a variety of different configurations, ranging from top-of-rack installations to full fabric deployments.
We are pleased to have the first full fabric deployments running in live production. Those deployments include Qihoo 360 in China and Australia-based Oracle [Orica]. In Q1, we also had a QFabric win in Europe at Jan Yperman hospital in Belgium. Customer feedback overall is good, and we are encouraged with the pipeline we are building.
Juniper has caught some flack in the industry for the slow roll-out of its full data center fabric and a general lack of reference customers who have deployed a full fabric. Most initial customers of QFabric have been deploying the QFX3500 as a traditional top-of-rack switch. This device operates as a “node” in a full QFabric solution. It’s reasonable to assume that all of those QFX3500 customers are at least considering a full-fabric deployment, but it’s not guaranteed.
Juniper did put me on the phone with a healthcare-focused cloud provider (Codonis) a few months ago to discuss its plans for a full-fabric QFabric installation, but that implementation was mostly in the planning stage. Juniper has also announced that Deutsche Börse (operator of the Frankfurt Stock Exchange), Thomson Reuters, Bell Canada and Terra (a Brazilian online media company) are all designing full-fabric deployments of QFabric, but none of those companies has announced whether it has put the system into production yet.
The lack of North American customers with production deployments is troubling. Juniper needs reference customers that U.S. companies can talk to. Qihoo 360 is a Chinese web security software developer. Will a stateside network architect be impressed by that reference? I doubt it. Jan Yperman Hospital is a former Nortel Networks reference customer, so I’m assuming Juniper displaced a legacy Nortel network in its data center. That could be a promising reference when it eventually gets QFabric into production. Orica, an Australian chemical company, is a nice win. Will it expand the technology to other sites? Network architects will want to know. (A transcription error by Seeking Alpha rendered Orica as Oracle.)
A Goldman Sachs analyst on the earnings call asked Juniper to specify how many of its 150 QFabric customers have deployed a full fabric. Juniper’s Dyckerhoff declined to be specific:
…[W]e have a mix of deployments for the customers who have adopted the QFX product line. They range from top-of-rack to full fabric. The reason they adopt the product line is because we have strategic alignment with them on the architecture that they want to deploy going forward. And so the focus for us is to give them a great experience as they adopt the key pieces of technology and there’s a good number of them that actually adopt the full fabric…
Juniper’s slow roll-out of QFabric has been unfortunate, especially since much of the early hype surrounding the technology has been usurped by the rise of software-defined networking. The two technologies aren’t necessarily interchangeable, but web-scale companies and cloud providers (a sweet spot for QFabric) are looking hard at software-defined networking, which has got to be a challenge for Juniper.
Many of the bloggers who have analyzed and reported on the news of Insieme, Cisco’s latest spin-in, have talked about how the company’s formation is a morale-killer for Cisco employees. The concern is justified. Cisco’s spin-in approach enriches a select number of employees recruited to join the likes of Insieme and other past spin-ins like Nuova Systems, leaving engineers who work on products like the Catalyst line or the Aironet products to wonder when their big payday will come. That can lead to a brain drain as engineers bolt for a company that gives them a better opportunity to shine.
But what about the technology strategy that is coalescing around Insieme and the other moves Cisco is making with software-defined networks? Shouldn’t the bigger concern be that Cisco might be making a strategic blunder?
Last week Cisco circulated an internal memo that confirmed for employees its $100 million investment in Insieme, the spin-in that will form part of Cisco’s “build, buy, partner” strategy for software-defined networking (SDN). The Cisco memo, published by Om Malik, claims that the networking industry hasn’t yet settled on a definition for SDN, let alone a value proposition:
Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.
As Brad Casemore points out, here is Cisco’s opening salvo. It’s going to resist, or at least play down the value of, one of the core attributes of software-defined networking: the decoupling of the control and data planes. There is a little bit of cognitive dissonance with this statement. This decoupling of the control and data planes is an essential foundation of SDN. It enables centralized, flow-based networking. It enables programmability. It enables organizations to deploy third-party applications on a network through an SDN controller. But Cisco claims to transcend this idea. This vague dismissal should be troubling to SDN proponents.
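The decoupling at issue can be reduced to a very small model: a central controller (the control plane) computes forwarding decisions and pushes them down into switch tables, while the switches (the data plane) do nothing but match and forward. The sketch below is a deliberately simplified illustration of that division of labor — the class names are hypothetical, and a real deployment would use a protocol such as OpenFlow to install the entries.

```python
# Minimal sketch of control/data plane separation. Hypothetical and
# simplified; not any vendor's implementation.

class Switch:
    """Data plane: pure table lookup, no local routing intelligence."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # (src, dst) -> output port

    def forward(self, src, dst):
        # Unknown flows return None, i.e. get punted to the controller.
        return self.flow_table.get((src, dst))

class Controller:
    """Control plane: centrally computes decisions and pushes them down."""
    def __init__(self, switches):
        self.switches = switches

    def install_flow(self, switch_name, src, dst, port):
        self.switches[switch_name].flow_table[(src, dst)] = port

s1 = Switch("s1")
ctrl = Controller({"s1": s1})
ctrl.install_flow("s1", "10.0.0.1", "10.0.0.2", port=3)
print(s1.forward("10.0.0.1", "10.0.0.2"))  # 3
```

Programmability and third-party applications follow from exactly this structure: anything that can talk to the controller can reshape every flow table in the network, which is the capability Cisco’s “transcends this definition” language glosses over.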
The memo goes on to quote Cisco CTO Padmasree Warrior to support this notion:
“If you ask five customers what SDN means to them, you may get five different answers. Customer motivations and expectations are different based on their business problem or deployment scenario,” Warrior says.
It’s true that some people new to the subject initially perceived OpenFlow as an architecture, rather than just a protocol that enables SDN. Once they get educated on the subject, few networking pros express much confusion on the matter. However, is Cisco’s view really transcending the current SDN definition? The memo muddies the waters a bit by claiming that Cisco’s Nexus 1000v virtual switch is an example of SDN:
While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy. For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.
Sure, the Nexus 1000v introduces a version of SDN to the extreme edge of a virtualized data center, but it doesn’t come close to achieving the network agility and programmability promised by software-defined networks, whether or not they are enabled by OpenFlow. What about the rest of the data center LAN, filled with physical switches so constrained that both the IETF and the IEEE are re-engineering Ethernet in order to eliminate a legacy protocol like spanning tree?
Proponents say that SDN has the potential to eliminate spanning tree by defining flow routes centrally in a server-based controller, thus eliminating the risk of loops. Why upgrade to Shortest Path Bridging (SPB) or Transparent Interconnection of Lots of Links (TRILL) when an SDN network can do it? If you want to use TRILL or SPB in your data center network today, you need to upgrade to the newest generation of your vendor’s switches, and you won’t be able to reverse course midway through. These vendors won’t play together. You can’t mix Brocade’s iteration of TRILL with Cisco’s. You can’t mix Avaya’s iteration of SPB with Cisco’s or Brocade’s. You probably wouldn’t want to mix vendors in your data center, but you also want investment protection with these new data center fabrics, don’t you? Five years from now, when you need to refresh the server access layer, you’re locked into whatever vendor you’ve chosen.
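The reason a controller can dispense with spanning tree is worth spelling out: spanning tree blocks redundant links because each switch only sees its neighbors, while a controller sees the whole topology and can compute explicit loop-free paths, leaving every link usable. A toy sketch of that computation, with an invented three-switch triangle topology that spanning tree would have to break:

```python
# Illustrative only: a controller's global view lets it compute loop-free
# paths over a topology containing a physical loop. Switch names invented.
from collections import deque

links = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s3"],
    "s3": ["s1", "s2"],
}

def shortest_path(src, dst):
    """BFS over the global topology; the result is loop-free by construction."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# Both links out of s1 stay in service; spanning tree would block one of them.
print(shortest_path("s1", "s2"))  # ['s1', 's2']
print(shortest_path("s1", "s3"))  # ['s1', 's3']
```

TRILL and SPB solve the same problem by distributing a link-state computation across the switches themselves, which is why they demand new switch hardware; the controller approach moves that computation off the boxes entirely.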
You can ditch spanning tree in an OpenFlow-based SDN network using any combination of switches that support OpenFlow. Heck, Nicira Networks claims its product can get you there without even using OpenFlow switches. Just leave your legacy network in place. You know who is using OpenFlow? Google. You know who is using Nicira? eBay. Fidelity. Rackspace. NTT. Concerns about scalability with SDN may be justified, but some heavyweight companies have put it into production.
But never mind that for now. The Cisco memo expounds on the virtues of open, programmable networks (something that Arista Networks has offered for a couple years now). Toward the end, the memo lifts the veil off of Cisco’s SDN approach.
“Our strategy is to continue to offer choices to our customers so that they are not forced to go down a single path,” Warrior says. “We have a multipronged approach that goes beyond current perceptions of SDN, leveraging business-based use cases as building blocks so that we achieve architectural consistency and bring to bear the richness of all our capabilities.”
Warrior adds that Cisco already builds a lot of intelligence into its network silicon and software. Making them open and programmable will further unlock the value, while enabling further application awareness.
I will give Cisco credit here. The industry needs more “business-based use cases” for SDN. Midsized enterprises and even many large enterprises do not need SDN today. The networking pros at these smaller companies who ask me about SDN are interested in the technology, but mostly they just want to stay current with technology. They don’t need it. Today the emerging SDN market is focused on serving the needs of larger enterprises and web-scale companies. Broader business cases for the technology are years away. Many SDN start-ups are focusing on cloud providers and web giants rather than enterprises.
However, the mention of network silicon above (translation: ASICs) worries me. Here we have Cisco saying that it will make its ASICs and its software (IOS, NX-OS) open and programmable. Just how open and programmable will Cisco’s technology be? Look at this job posting for a software engineer at Cisco. (It may not last long; it’s been scrubbed of certain details since I first reported its existence a couple weeks ago.) This and another job posting (which disappeared from Cisco’s website a few days ago) made many references to a ConnectedApps team that is developing APIs for a software development kit (SDK) that will open up Cisco’s technology to third-party developers as part of an SDN initiative.
Just how open and programmable will an initiative based on APIs be? This doesn’t sound like an API for OpenFlow. It sounds like something else, given Cisco’s downplay of OpenFlow. APIs are a way to allow third-party developers to hook their software to another vendor’s proprietary software. There’s nothing particularly open about that. SDN is about more than hooking third-party software to the edge of Cisco’s black box, whether that black box is in the form of software or an ASIC. SDN is what it is: networks defined by software rather than hardware. How do you do that? By opening up the black box of networks and letting engineers build their networks in new ways. There is a control plane and there is a data plane. SDN decouples them and opens up the network to a whole new world of possibilities. It’s as simple as that.
In a few years, more IT organizations will want an open, software-defined network. Cisco needs to find a way to be relevant in such a world. APIs won’t cut it.
Keeping up with emerging start-ups in the software-defined networking (SDN) market is becoming a full-time job.
Most of the SDN buzz centers on Layer 2/3 networking. That’s what is dominating the agenda at this week’s Open Networking Summit. However, a smaller group of start-ups are focusing on SDN at Layer 4-7.
Today network engineers virtualize Layer 4-7 services by deploying software images of leading network appliance vendors on x86 server hardware. These software images, often labeled as virtual appliances, are available from several WAN optimization and application delivery controller vendors, for instance.
Enterprises achieve scale with this approach by adding more virtual appliance images. However, bottlenecks remain inevitable.
“The real problem here is the operating system itself,” said Steve Georgis, CEO of LineRate Systems, a new start-up that specializes in virtual Layer 4-7 services. “Linux was designed to be a general purpose OS, not a network OS. The network stack spends a lot of time managing network connections. Every time you add a network connection, the amount of time that stack spends on managing connections grows and it can spend less and less time managing the actual packets. As you scale up to the thousands of simultaneous connections, the operating system is left with very little time to do any real work. You run into pretty dramatic bottlenecks and throughput falls off quickly.”
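Georgis’s argument is a scaling one: if the stack spends some fixed amount of work per packet plus an overhead that grows with the number of tracked connections, then the fraction of CPU left for “real work” shrinks as connections multiply. A toy model makes the shape of that curve visible. The constants below are invented purely for illustration — they are not measurements of Linux or of LineRate’s product.

```python
# Toy model of connection-management overhead in a network stack.
# All constants are hypothetical, chosen only to illustrate the trend
# Georgis describes: throughput falls off as connection count scales up.

PACKET_WORK_US = 1.0            # fixed useful work per packet (microseconds)
MGMT_COST_PER_CONN_US = 0.001   # invented per-connection bookkeeping cost
                                # paid on every packet (i.e. O(n) management)

def packets_per_second(connections):
    # Total time per packet = useful work + connection-management overhead.
    per_packet_us = PACKET_WORK_US + MGMT_COST_PER_CONN_US * connections
    return 1_000_000 / per_packet_us

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} connections -> {packets_per_second(n):>10.0f} pkts/sec")
```

Under this model, going from 100 connections to a million cuts per-core packet throughput by roughly three orders of magnitude, which is the “dramatic bottleneck” in miniature. A stack whose per-packet cost is independent of connection count — the property LineRate claims for LROS — would show a flat line instead.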
Some enterprises will eliminate these bottlenecks by attaching a network acceleration module to a server to offload some of the processes that can overwhelm a server’s CPU, like TCP termination on an application delivery controller. Unfortunately, once you add these modules, you are pretty limited in how you deploy Layer 4-7 services. You can’t stand up a new application delivery controller just anywhere. You have to put it on a server with the module.
LineRate Systems emerged from stealth mode today with a new acronym: SDNS (Software-defined network services). Its technology, the LineRate Operating System (LROS), is a re-engineered network stack for a Linux kernel that enables wire-speed throughput on a Linux server. Georgis claims that this can deliver 20 to 40 Gbps of network processing capability on a commodity x86 server with extremely high session scalability (hundreds of thousands of full-proxy Layer 7 connections per second and more than 2 million concurrent active flows).
LineRate has done some additional software engineering under the hood, including some work to eliminate blocking among cores within a multi-core CPU.
On top of this LROS, LineRate is offering LineRate Proxy, a product that operates as a full proxy for Layer 4-7 services on commodity server hardware. It includes several features: load balancing, content switching and filtering, SSL termination/origination, ACL and IP filtering, TCP optimization, DDoS blocking and an IPv4/IPv6 translation gateway.
Georgis said LineRate will develop more functionality in security, network monitoring, and Layer 7 switching in the future. The company is initially targeting cloud providers, but it expects to develop an enterprise market, particularly among companies that are building private clouds.
Application-aware firewall vendor Palo Alto Networks has filed for an IPO that could signal big competitive trouble for Cisco and Juniper Networks.
Though Palo Alto has not yet turned a profit (the company reported a loss last year of $12.5 million), it more than doubled revenue to $119.6 million in 2011 from $48.8 million in 2010.
Many believe that growth came from customers that couldn’t find comparable features in Cisco and Juniper products and jumped ship. Blogger Brad Reese points out that while Palo Alto’s revenues soared +141% in the six months ending January 31, Cisco saw a revenue increase in the same time period of only +7.7%.
If Palo Alto’s gains have been direct losses for Cisco and Juniper, things only stand to get worse if Palo Alto goes public. After all, many large enterprises are hesitant to invest millions in a company that isn’t public and financially stable.
“No one is going to spend $20 million on a product from a company that isn’t public,” said one engineer at a multinational consulting firm, who recently made an initial test investment in Palo Alto gear. “When I went to do a first pass [on buying firewalls], it was a half million bucks. It’s a big commitment to change a firewall product. You’re signing on for a long-term relationship with subscription services in addition.”
Even more threatening to Cisco and Juniper is that this engineer — like others — has found Palo Alto technology superior to the competition.
“When I told the other vendor that I wanted IDS, antivirus and content inspection, they looked at me like I had three heads. When I said that to Palo Alto, they said, ‘Of course you would do that. Why wouldn’t you?’” he said. “If you look at performance statistics on a box from another vendor, they tell you what the performance is on a per-service basis, but they don’t tell you what happens when you turn all services on. That’s not the case with Palo Alto.”
That’s likely because Palo Alto has created next-generation, application-aware firewalls from the jump — never having to adapt legacy technology to do new tricks. The company was founded in 2005 by Nir Zuk, who had been CTO at NetScreen before it was acquired by Juniper. As some tell it, Zuk went to the Juniper board with the message that firewalls had to become application-aware. Juniper eventually followed that advice, but not soon enough for Zuk, who founded a company based on the idea that next generation firewalls should offer application-level monitoring with transaction detail and constantly updated signatures. Since then, Gartner has dubbed next-generation firewalls as mainstream and Cisco recently announced the launch of an application-aware firewall. Juniper has also announced similar features. It remains to be seen whether the more established vendors can catch up.