Gartner’s Magic Quadrant for application delivery controllers has a few new faces this year and a new leader.
Application delivery controllers (ADCs) are Layer 4-7 devices that evolved out of the load balancer industry. ADCs optimize application deployments within a data center, performing a variety of tasks such as SSL offloading, web application firewalling, and application acceleration. Websites use them extensively, but enterprises also make broad use of them for big, complex enterprise applications like ERP systems.
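At their core, ADCs still do the load balancer's basic job: spreading client connections across a pool of backend servers. A minimal round-robin sketch in Python (hypothetical names, illustration only, not any vendor's actual implementation):

```python
from itertools import cycle

class RoundRobinPool:
    """Distribute incoming connections across backends in turn."""
    def __init__(self, backends):
        self._cycle = cycle(backends)

    def pick(self):
        # Each call returns the next backend in rotation.
        return next(self._cycle)

pool = RoundRobinPool(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
assignments = [pool.pick() for _ in range(4)]
# The fourth connection wraps around to the first backend.
print(assignments)
```

A real ADC layers the features named above (SSL termination, firewalling, acceleration) on top of this basic distribution logic, often with health checks and weighted scheduling in place of simple rotation.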
Gartner’s Magic Quadrant (MQ) is a market assessment device used to evaluate both the ability of vendors to build effective and innovative products (completeness of vision) and their ability to market and sell those products (ability to execute).
The Leaders (high ratings in vision and execution): F5 Networks, Citrix Systems and Radware
F5 and Citrix remain leaders in the application delivery controller (ADC) market for yet another year. Citrix drew praise for being a leader in virtualized ADCs and its rich features and deep understanding of applications. Gartner sees good potential for Citrix to bundle its virtual ADC with its Xen hypervisor products.
F5 continues to dominate in both technology and sales. It has strong customer loyalty, due in part to its DevCentral user community portal and its iRules scripting language and iControl API — technologies that have made F5’s ADCs extremely customizable. Gartner cautioned that F5 is very reliant on hardware innovation, whereas competitors are doing more in software. Some vendors, like Zeus Technology, do nothing but software, relying on industry-standard servers for deployment of their technology. Gartner claims F5 also has limited features and functionality in its lower-end hardware, forcing smaller customers to spend a lot of money to get the features they want.
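iRules themselves are written in TCL, but the kind of per-request, content-based traffic steering they enable can be sketched generically. A rough Python analogue (hypothetical rule and pool names, not F5's actual API):

```python
def route_request(path, host):
    """Steer a request to a server pool based on its content —
    the kind of per-request logic an iRule expresses."""
    if path.startswith("/images/"):
        return "static_pool"   # offload static assets to a dedicated pool
    if host.startswith("api."):
        return "api_pool"      # separate pool for API traffic
    return "default_pool"

print(route_request("/images/logo.png", "www.example.com"))  # static_pool
print(route_request("/v1/users", "api.example.com"))         # api_pool
```

It is exactly this ability to hook arbitrary logic into the traffic path that makes the platform customizable, and that keeps the DevCentral community sharing and refining rules.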
Radware, meanwhile, has climbed into the leader category from the visionaries box, thanks in part to the successful integration of its Nortel Alteon acquisition. Analysts praised Radware’s vision for how ADCs fit into virtualized and cloud architectures.
Visionaries (high rating for vision): Zeus Technology, Strangeloop, ActivNetworks and Aptimize.
Here’s where things get a little interesting. Gartner has added three newbies to the MQ this year and all of them are here in the visionaries box, joining the software-based ADC vendor Zeus Technology. ActivNetworks, Aptimize and Strangeloop are the new players here and each of them has a unique specialty (Technically, Aptimize is straddling the line between visionary and niche player). ActivNetworks sells a virtual ADC that optimizes mobile traffic and video streaming. Aptimize focuses on messy, browser-based apps. Strangeloop specializes in HTTP optimization. Gartner says these new vendors, particularly Aptimize and Strangeloop, are often deployed in tandem with ADCs from one of the more advanced vendors on the market.
Challengers (high rating for execution): None, same as last year.
Niche Players (low ratings for vision and execution, but generally considered good and viable options for specific environments): Cisco Systems, A10 Networks, Brocade, Array Networks, Barracuda Networks and Crescendo Networks.
Despite holding the number two market share position, Cisco remains a niche player. Gartner says Cisco makes most of its money here in straightforward load balancing and has limited application expertise compared to other vendors, which inhibits its ability to help with complex applications.
This week Cisco’s new cloud CTO Lew Tucker is traveling the country to meet with journalists in a coming-out party of sorts. This wouldn’t be so noteworthy for a new Cisco exec except that Tucker, who led the cloud initiative at Sun, embodies software culture, and his presence is indicative of what Cisco is desperately trying to become: a cloud software player, and most definitely not your daddy’s network hardware company.
Tucker, who was a co-creator of Sun’s little-known open source network virtualization project Crossbow, refers to the network as a “distributed application” and tosses about terms like “orchestration” and “automation” in relation to the network. He laughs gently at the concept of hardware-driven networking folks learning to become part of development communities, and he thinks networking pros “might like” to play with APIs. He even talks about Cisco’s participation in OpenStack, the open source cloud management software project. But don’t get too excited: Tucker cautions that Cisco will be bringing thoughts to the OpenStack table, not making its own software open.
Specifically, the message Tucker is taking on the road is that Cisco will be a key provider of network management and automation software for the cloud. In doing so, Cisco would like to be seen as the provider of the “virtual private data center,” a cordoned-off enterprise data center in a publicly hosted cloud. That would mean selling the automation and management software necessary to enable these clouds (see the acquisition of LineSider Technologies this week), as well as the high-performance networking components necessary to support them.
But if Cisco is aiming to help carriers and very large enterprises build these publicly hosted private clouds, won’t that ultimately lead to far fewer on-premises data center LANs? In that case, just what will Cisco sell? Does the company believe that network management and automation software licensing will supplant hardware component sales? And just how will that message sit with Cisco’s core networking audience? After all, that’s a group highly invested in building and managing its own networks.
With this in mind, Tucker has a very fine line to walk. In addressing these quandaries, Tucker promises that Cisco won’t abandon its hardware roots and has plenty of units to sell, both in building out these clouds and in equipping companies that are not yet moving to the cloud. In the meantime, Cisco will also sell software and hardware appliances that enable management of both virtual and physical networks, and will increasingly move to a software licensing model for a host of offerings, ranging from automation to hosted unified communications.
To be sure, Cisco is ahead of its networking rivals when it comes to virtualization and cloud management. But Cisco has also received flak for diluting its core networking focus with investments in side businesses like Flip cams. Meanwhile, HP’s networking market share is apparently soaring, and companies like Arista are offering equipment that rivals Cisco’s at much lower prices. It will be interesting to watch Cisco’s balancing act in the coming months.
Have you ever noticed that networking vendors rarely address the subject of emulators built with their router software? To some degree, network engineers seem fine with this. As long as Cisco looks the other way, engineers can continue to use Dynamips router emulators using shadily licensed IOS. As long as Juniper looks the other way, engineers can continue to build Olive emulators using JUNOS.
However, Cisco isn’t looking the other way as much as it used to. As blogger Aaron Conway noted today, Cisco is making it harder and harder to download Cisco software without a support contract. Networking bloggers have been squawking about this for months. Cisco’s actions prompted blogger Greg Ferro to start a petition back in July asking Cisco to create an IOS educational licensing option (the petition is currently not working).
Juniper hasn’t made any moves to make it harder to work with Olive as far as I can tell, but the company would be well within its rights to do so. Cisco has a perfect right to crack down on IOS licensing, too. But it sure would be nice of these vendors to address the issue of emulators directly.
Even though Cisco has made it harder for engineers to run an IOS emulator in a lab, I haven’t seen Cisco actually acknowledge that these changes are aimed at Dynamips and other emulators. I’ve never seen Cisco even acknowledge the existence of something like Dynamips. If you run a search for the word Dynamips on Cisco’s website, you get back just one result: a transcript of a panel discussion at Cisco Live 2009 entitled “Insiders Guide to Cisco Career Certifications.” In the transcript, Cisco employee and NetworkWorld blogger Jimmy Ray Purser describes Dynamips as the “best way” to do IOS emulation at zero cost.
Other than that, Cisco has never really addressed Dynamips or other emulators directly. Even when Jessica Scarpati asked Cisco to comment for a story she did on the Dynamips crackdown, the vendor chose not to address the emulator’s existence directly in its response.
Why do vendors like Cisco and Juniper avoid discussing these emulators directly? Wouldn’t some clarification on the tools help everyone? Wouldn’t a formal educational licensing structure be good for vendors’ customers?
Other vendors have made their operating systems much more readily available for learning. Startup Arista Networks has released a free version of EOS, the software it runs on its switches. Meanwhile, open source vendor Vyatta has built its whole business around making its routing software free to everyone. There’s no question that engineers can learn a lot about networking with this free software. Perhaps other vendors should follow their lead.
Regardless of whether vendors like Juniper and Cisco want to ignore or restrict the use of emulators like Olive and Dynamips, I think the community of networking pros who use these emulators to learn the technology and grow in their careers could benefit from some clarification on this issue. Just tell engineers where they stand. Listen to their request for educational licenses. Don’t let them go on working in this legal gray area.
Tiernan Ray at Barron’s blogs that HP Networking’s market gains appear to be coming directly at Cisco’s expense. He notes that HP’s Q4 earnings detailed a 300% increase in networking revenue thanks to the 3Com acquisition and that HP’s own ProCurve products saw a 50% increase year over year. Meanwhile, Cisco’s revenues reported earlier this month were solid, but the company issued guidance for next quarter that was very soft, about $1 billion lower than Wall Street analysts were predicting.
An important catch by Ray: During Cisco’s earnings call, the company said sagging sales to state and local governments, down about 25%, were a major challenge. However, HP CFO Cathie Lesjak claimed her company’s great quarter was partially due to rising sales in state and local government accounts. Is this an early indication that government IT shops are looking to HP as a cheap alternative to Cisco in their networks? Will the private sector follow?
Like celestial bodies wandering the cosmos, networking vendors and wireless LAN vendors are drawn to each other’s inescapable gravity. Wired networking vendors have been buying wireless LAN vendors since the dawn of the wireless LAN controller. Cisco Systems, for instance, had little more to offer than Wi-Fi hot spots until it bought Aironet in 1999. HP bought Colubris, and later acquired another WLAN product line with its 3Com deal. Enterasys Networks inherited a WLAN product line when it merged with Siemens Enterprise Networks. Extreme Networks and Brocade have OEM relationships with Motorola. Would it be terribly shocking if Motorola decided to buy Brocade or Extreme?
And now Juniper Networks has finally acknowledged its inescapable attraction to WLAN, announcing yesterday that it had struck a deal with Belden to buy Trapeze Networks for about $152 million. Belden, a network cable manufacturer, bought Trapeze two years ago for about $133 million.
Juniper has become a strong Cisco alternative in the campus networking space with its growing line of EX switches, but the nature of office networks is changing. A great many offices today still have plenty of Ethernet cables and ports pulled to every desk. But more and more of those offices also have a wireless LAN overlay, so that employees can unplug their laptops and carry them to a meeting or the lunch room without losing network access. Yours truly has that option today.
It’s only a matter of time before some enterprises decide to cut down on the number of ports they pull to desks and start replacing some of the switches in their wiring closets with WLAN access points. Juniper is expanding and future-proofing its foothold in campus networks by expanding into wireless LAN.
Juniper will also have an opportunity to integrate its wired networking products with Trapeze’s WLAN technology. Wired-wireless integration, for simplified deployment and management, has been much hyped these past couple of years, but very little has been done in the area.
For some ideas on how that integration might unfold, check out Andre Kindness’s Forrester Research blog.
Vendors can push Fibre Channel over Ethernet (FCoE) all they want, but the technology is simply not ready for deployment, argues Stephen Foskett, Gestalt IT community organizer, who presented this week at the Large Installation System Administration (LISA) conference in San Jose.
That’s not what Cisco had to say about FCoE. While we were in San Jose, company officials explained how users can start slowly with FCoE implementation at the access layer. Foskett, however, points out that most vendors don’t have a full FCoE solution just yet (that includes Cisco, which won’t have a core FCoE play until at least early 2011). According to Foskett, vendors use FCoE as a product differentiator and as a protector of Ethernet.
Foskett’s LISA presentation was called “Storage over Ethernet: What’s in it for me?” The answer to that question is “Not a whole lot” when it comes to FCoE deployment for now. But Foskett is a fan of iSCSI. Find out why …
Implementing Fibre Channel over Ethernet (FCoE) for converged data center networks doesn’t mean users have to invest in a complete rip-and-replace … even when implementing Cisco FCoE. We met with Kash Shaikh, Cisco’s data center solutions senior manager of marketing, who explains in this video that users can start Cisco FCoE implementation in the access layer with converged network adapters at the servers, and later move toward FCoE in the core.
Recently Motorola announced a significant change to its wireless LAN architecture with its WiNG 5 announcement. With WiNG 5, Motorola is running identical firmware across its wireless LAN controllers and access points. Its access points have enough memory and processing power to operate independently from a controller, allowing enterprises to deploy controllerless WLAN infrastructure.
This new architecture allows an access point to perform some of the high-level security, policy and RF management roles that have traditionally been centralized in a controller.
At first glance it appeared that Motorola was going the way of start-up Aerohive, which has had a controllerless approach to WLAN from its inception. However, Motorola isn’t dumping the controller appliance altogether. It still has a role, but Motorola admits that the role is evolving. In fact, from what Motorola says, it sounds like everything about the WLAN controller is evolving.
Manju Mahishi, Motorola’s director of product management, told me that WiNG 5 is meant to give enterprises flexibility in deployment and to avoid bottlenecks associated with backhauling high throughput 802.11n data through centralized controllers. But he said that controllers will not be disappearing from Motorola’s WLAN architecture.
“We believe very strongly that in the vast majority of cases, depending on the number of access points in a local site, you can get away without having controllers. Up to 24 access points can be deployed without any controller,” Mahishi said. “But there are scenarios where we still see that certain enterprise customers will still want to pull data centrally. They want to do all data processing through a controller, whether on specific VLANs or on guest access. Even though we see the benefits of distributed intelligence and having the access points doing all the work, there are still scenarios where [enterprises] will want to pull certain data if not all data through controllers, whether they are doing packet inspection or applying some security policies.”
He said there are some scenarios where the access points will simply not have the processing power to match Motorola’s high-end controllers. For instance, a highly subnetted network will require a controller. If a company wants to extend certain VLANs from a central campus out to branch offices, it will also use controllers to pull data back through a WAN.
Beyond the role of the controller, Mahishi said the format of the controller is also set for an evolution. He said Motorola’s OEM partnerships with Brocade and Extreme Networks are pushing the concept of a controller in a new direction. The ability to virtualize a controller and run it on a third-party switching platform from one of these OEM partners, he said, could offer new ways of scaling a wireless LAN while simultaneously integrating it into the wired infrastructure.
“We can easily virtualize [controller] functionality,” Mahishi said. “When we were demonstrating WiNG 5, we were running it on a laptop. Clearly the intent is to be able to take this capability and run it on a cloud-based controller or any server-based appliance that can scale. The WiNG 5 architecture helps us get there.”
Networking pros will doubtless follow Motorola’s evolution of the controller-access point architecture very closely. Controllers from most WLAN vendors are extremely expensive and vendors like Aerohive and Meraki have made hay with customers by offering WLAN infrastructure that is free of a costly physical appliance. Aerohive’s access points collaborate as a virtual controller while Meraki offers cloud-based, subscription controller functionality, which transfers the controller function from a big-ticket capital expense to a low-cost, but ongoing, operational expense.
The days of being just a networking pro are officially over … or that’s what networking vendors would like you to believe.
Maybe that won’t be the case if your company never virtualizes its servers or applies dynamic provisioning or moves toward converged storage and data center networks.
So probably a fairer way to state it is: you can probably be just a networking pro for a little while longer. But then you’ll very likely be forced to provide networking that enables and even optimizes server virtualization performance, and you’ll be asked to figure out how your data center LAN and SAN can be managed as one.
Banking on that being the case (and hoping to sell their new technology strategies), HP rolled out an integrated infrastructure certification this month to rival Cisco’s Unified Computing and data center infrastructure specialist certs.
The HP ExpertONE converged infrastructure certification program includes network-specific certifications that teach skills in so-called next generation data center networks (read converged), as well as how to migrate from proprietary network technology (read Cisco-based networks) to multi-vendor “open network infrastructures.” The program also includes a systems component that teaches systems engineers how to apply IT to business processes, and includes Return-on-Investment (ROI) analysis in a converged infrastructure. It’s no coincidence that HP rolled the cert out during Interop New York, which has shifted its focus almost entirely to next generation networks that support virtualization and the cloud.
HP claims to be the first provider of integrated technology certification, but Cisco has long had its Unified Computing certifications that reach across servers, networks and storage. It also has a data center network infrastructure design certification that focuses on converged networks. These certs don’t, however, stretch across multi-vendor environments.
Vendors notoriously roll out certifications for technologies they want to sell – and all of these programs can be seen from that perspective. Still, with some form of virtualization seeing uptake at 90% of companies, and many of those same companies considering at least some form of private cloud implementation, it’s fair to say that networking professionals need to consider broadening their horizons.
“Integrated stack.” If you’re confused by that term, don’t be concerned.
Apparently it’s the buzzword for the integration of applications and virtualized network instances in a single stack that will enable dynamic provisioning of infrastructure and applications on demand. Still a bit confused?
At Interop New York Wednesday, Cisco’s VP of data center and virtualization Ben Gibson used his keynote session to tout Cisco’s role in this integrated stack. More concretely, he preached the idea that vendors must work together to co-develop and market pieces of the infrastructure (virtual or not) that enable dynamic provisioning – offering the entire picture “from the application to the disk.” Though Cisco set the tone for selling highly proprietary networking equipment in the ’90s, Gibson harked back to that era, calling on vendors to work closely together the way companies did to enable e-commerce.
As an example of multi-vendor strategies, Gibson noted Cisco’s Vblock alliance with EMC and VMware, which offers “pre-validated infrastructure pieces that bring together network, compute and storage.” Gibson also pointed to the HP-Microsoft alliance that also brings together hardware and applications.
“The vendor community has to work together in new and interesting ways to deliver solutions that drive our systems integration and drive simplicity,” Gibson said, calling it a “new way of thinking about things” to bring about a “single cohesive customer experience.”
Customers in the audience weren’t necessarily buying into the idea just yet. One network manager from a major telecom carrier said buying into “solutions” from these multi-vendor partnerships is not that different than buying technology from one vendor – “it’s still about lock-in to one prescribed system,” he said.
Some network engineers are just not ready to buy into the infrastructure-on-demand play. One Interop attendee, a network engineer from a national health insurance firm, said his company would be ready to converge its data center networks in about five years and wouldn’t consider infrastructure on demand until then. What’s more, he doubted that his users would understand the concept of automatic provisioning of infrastructure to meet their business needs. In turn, he doubted his engineers would understand user business needs well enough to supply the right applications automatically. He referred to the whole transition as “a big learning process.”