The Network Hub


November 5, 2010  3:41 PM

Motorola on the future of the wireless LAN controller

Shamus McGillicuddy

Recently Motorola announced a significant change to its wireless LAN architecture: WiNG 5. With WiNG 5, Motorola runs identical firmware across its wireless LAN controllers and access points. Its access points have enough memory and processing power to operate independently of a controller, allowing enterprises to deploy controllerless WLAN infrastructure.

This new architecture allows an access point to perform some of the high-level security, policy and RF management roles that have traditionally been centralized in a controller.

At first glance it appeared that Motorola was going the way of start-up Aerohive, which has had a controllerless approach to WLAN from its inception. However, Motorola isn’t dumping the controller appliance altogether. It still has a role, but Motorola admits that the role is evolving. In fact, from what Motorola says, it sounds like everything about the WLAN controller is evolving.

Manju Mahishi, Motorola’s director of product management, told me that WiNG 5 is meant to give enterprises flexibility in deployment and to avoid bottlenecks associated with backhauling high throughput 802.11n data through centralized controllers. But he said that controllers will not be disappearing from Motorola’s WLAN architecture.

“We believe very strongly that in the vast majority of cases, depending on the number of access points in a local site, you can get away without having controllers. Up to 24 access points can be deployed without any controller,” Mahishi said. “But there are scenarios where we still see certain enterprise customers will still want to pull data centrally. They want to do all data processing through a controller, whether on specific VLANs or on guest access. Even though we see the benefits of distributed intelligence and having the access points doing all the work, there are still scenarios where [enterprises] will want to pull certain data if not all data through controllers, whether they are doing packet inspection or applying some security policies.”

He said there are some scenarios where the access points will simply not have the processing power to match Motorola’s high-end controllers. For instance, a highly subnetted network will require a controller. If a company wants to extend certain VLANs from a central campus out to branch offices, it will also use controllers to pull data back across the WAN.

Beyond the role of the controller, Mahishi said the format of the controller is also set for an evolution. He said Motorola’s OEM partnerships with Brocade and Extreme Networks are pushing the concept of a controller in a new direction. He said the ability to virtualize a controller and run it on a third party switching platform from one of these OEM partners could offer new ways of scaling a wireless LAN while simultaneously integrating it into the wired infrastructure.

“We can easily virtualize [controller] functionality,” Mahishi said. “When we were demonstrating WiNG 5, we were running it on a laptop. Clearly the intent is to be able to take this capability and run it on a cloud-based controller or any server-based appliance that can scale. The WiNG 5 architecture helps us get there.”

Networking pros will doubtless follow Motorola’s evolution of the controller-access point architecture very closely. Controllers from most WLAN vendors are extremely expensive and vendors like Aerohive and Meraki have made hay with customers by offering WLAN infrastructure that is free of a costly physical appliance. Aerohive’s access points collaborate as a virtual controller while Meraki offers cloud-based, subscription controller functionality, which transfers the controller function from a big-ticket capital expense to a low-cost, but ongoing, operational expense.

October 25, 2010  11:19 AM

New HP networking certification: You need to know more than just networking

Rivka Little

The days of being just a networking pro are officially over … or that’s what networking vendors would like you to believe.

Maybe that won’t be the case if your company never virtualizes its servers or applies dynamic provisioning or moves toward converged storage and data center networks.

So a fairer way to state it is: you can probably be just a networking pro for a little while longer. But then you’ll very likely be forced to provide networking that enables and even optimizes server virtualization performance, and you’ll be asked to figure out how your data center LAN and SAN can be managed as one.

Banking on that being the case (and hoping to sell their new technology strategies), HP rolled out an integrated infrastructure certification this month to rival Cisco’s Unified Computing and data center infrastructure specialist certs.

The HP ExpertONE converged infrastructure certification program includes network-specific certifications that teach skills in so-called next generation data center networks (read converged), as well as how to migrate from proprietary network technology (read Cisco-based networks) to multi-vendor “open network infrastructures.” The program also includes a systems component that teaches systems engineers how to apply IT to business processes, and includes Return-on-Investment (ROI) analysis in a converged infrastructure. It’s no coincidence that HP rolled the cert out during Interop New York, which has shifted its focus almost entirely to next generation networks that support virtualization and the cloud.

HP claims to be the first provider of integrated technology certification, but Cisco has long had its Unified Computing certifications that reach across servers, networks and storage. It also has a data center network infrastructure design certification that focuses on converged networks. These certs don’t, however, stretch across multi-vendor environments.

Vendors notoriously roll out certifications for technologies they want to sell – and all of these programs can be seen from that perspective. Still, with some 90% of companies adopting virtualization in one form or another, and many of them considering at least some form of private cloud implementation, it’s fair to say that networking professionals need to consider broadening their horizons.


October 21, 2010  5:46 AM

Cisco plays nice at Interop: Vendors must work together for infrastructure on demand

Rivka Little

“Integrated stack.” If you’re confused by that term, don’t be concerned.

Apparently it’s the buzzword for the integration of applications and virtualized network instances in a single stack that will enable dynamic provisioning of infrastructure and applications on demand. Still a bit confused?

At Interop New York Wednesday, Cisco’s VP of data center and virtualization Ben Gibson used his keynote session to tout Cisco’s role in this integrated stack. More solidly, he preached the idea that vendors must work together to co-develop and market pieces of the infrastructure (virtual or not) that enable dynamic provisioning – offering the entire picture “from the application to the disk.” Though Cisco set the tone for selling highly proprietary networking equipment in the ’90s, Gibson harkened back to that era, calling on vendors to work together as closely as companies did to enable e-commerce.

As an example of multi-vendor strategies, Gibson noted Cisco’s Vblock alliance with EMC and VMware, which offers “pre-validated infrastructure pieces that bring together network, compute and storage.” Gibson also pointed to the HP-Microsoft alliance that also brings together hardware and applications.

“The vendor community has to work together in new and interesting ways to deliver solutions that drive our systems integration and drive simplicity,” Gibson said, calling it a “new way of thinking about things” to bring about a “single cohesive customer experience.”

Customers in the audience weren’t necessarily buying into the idea just yet. One network manager from a major telecom carrier said buying into “solutions” from these multi-vendor partnerships is not that different from buying technology from one vendor – “it’s still about lock-in to one prescribed system,” he said.

Some network engineers are just not ready to buy into the infrastructure-on-demand play. One Interop attendee, a network engineer from a national health insurance firm, said his company would be ready to converge data center networks in about five years and wouldn’t consider infrastructure on demand until then. What’s more, he doubted that his users would understand the concept of automatic provisioning of infrastructure to meet their business needs. In turn, he doubted his engineers would understand user business needs well enough to supply the right applications automatically. He referred to the whole transition as “a big learning process.”


October 11, 2010  12:28 PM

Tech companies dream up new and horrible ways to lay off workers

Shamus McGillicuddy

Why are some tech companies so bad at firing people?

Mark Frauenfelder at BoingBoing highlighted a Telegraph report about how Everything Everywhere, the mobile carrier created by the merger of Orange and T-Mobile, took artless and heartless firing to a new low. Employees were herded into rooms by the dozen or by the hundred and shown a “traffic light system” that told them of their job status. If employees saw a red light, they were fired. If they saw a yellow light, they had to re-apply for their jobs. If they saw a blue light, their jobs were fine. If they saw a green light, they were getting one of a small number of newly created jobs.

Rather than sit through further humiliation as management offered soothing information about severance, etc., many of those who saw a red light reportedly stood up and walked out.

Can it get any worse than this? How about fortune cookies baked with the good news inside: “Unemployment checks are in your future!” Perhaps a creative executive could modify one of those “Easy” buttons from Staples, replacing the recording of the guy saying “That was easy” with a sample of Donald Trump blurting out “You’re fired.”


October 8, 2010  4:22 PM

Motorola smartens access points, pulls back WLAN controllers

Shamus McGillicuddy

The role of the wireless LAN controller appliance is shifting dramatically, and the days of the dumb access point are numbered. Motorola became just the latest WLAN vendor to deemphasize the role of its controller appliance with its new WiNG 5 architecture.

As we know, enterprise wireless LANs used to consist of a bunch of independent, “fat” access points that were basically islands of wireless with no centralized control. Then vendors like Aironet (now Cisco), Motorola and Aruba started introducing a controller-based WLAN architecture, which was much more scalable and (eventually) much more secure. This change opened up Wi-Fi’s potential from isolated hot spots to campus-wide, centrally managed deployments.

Now vendors are pulling back the controller’s role in enterprise WLAN. Meraki has moved its controller functionality into the cloud, building access points that are smart enough to survive on their own when contact is lost with Meraki’s cloud. Aerohive has distributed most of the controller functionality throughout its access points, with a simple management and policy piece sitting on a server.

Enterasys-Siemens’ HiPath wireless LAN product line has also deemphasized its controller in recent years. The HiPath access points manage QoS, encryption and RF management on their own, leaving the controller to handle configuration and policy control and roaming.

Now Motorola has committed to smarter access points, too, with its WiNG 5 architecture. With a simple software update, all of the company’s access points will now run the same software package as Motorola’s controller appliance. Apparently Motorola’s access points have enough compute capacity to handle this new functionality.

Like every vendor that has pulled back the controller’s role in WLAN, Motorola says the speeds involved in 802.11n can lead to a bottleneck effect in the controller. Dr. Amit Sinha, Motorola’s WLAN CTO, said that backhauling everything to the controller isn’t practical, especially when it comes to voice and video communications.

In demos in Boston this week, Motorola showed that the access points are capable of doing things traditionally reserved for its controllers. In one demo, an access point that was isolated from its controller was able to recognize and adjust to RF interference. In a second demo, the isolated access point was able to detect a rogue media server running unsanctioned streaming video over the wireless network and cut off access to that server.

Finally, Motorola demonstrated that making its access points smarter can boost performance. It streamed unicast video from a single wireless access point to 80 laptops, a feat that an adjudicator from Guinness World Records recognized as a new record.

What remains unclear to me: Why is Motorola keeping the controller at all? I know there’s a need for centralized configuration, policy and other management functions, but why does Motorola need to hold onto the standalone controller appliance? Can’t those management functions be run on an industry-standard server or as a virtual machine? If the access points are able to run the same code base as the controller, surely they can handle the data and control planes of the WLAN architecture on their own and leave the management plane to some simple software. Motorola probably has a good reason for this, but I didn’t hear much about it during the WiNG 5 announcement.


October 8, 2010  2:52 PM

Another look at Cisco’s network security strategy

Shamus McGillicuddy

We put Cisco’s security strategy under the microscope about six weeks ago after hearing from many, many networking pros who felt Cisco had lost its way, at least a little.  I think Cisco was hearing that message a little bit as well, because it focused heavily on its network security business this week with its latest round of Borderless Networks news. I received two separate briefings for this latest Cisco news cycle. The first briefing was a straightforward update on the various Borderless Networks products: the routers, switches, firewalls and software that make up the soup-to-nuts product portfolio.

The other briefing was strictly about Cisco’s security business. It was a WebEx panel led by Cisco’s security technology chief Tom Gillis and a coterie of marketing and product management folks. Unlike the first briefing, which was a one-on-one affair, this one was open to an unknown number of reporters and analysts who dialed in or made the trip to California to be there in person.

Gillis used this event to lay out Cisco’s current game plan for network security. The details of this talk didn’t make it into my Borderless Networks story this week, so I thought I’d lay out some of the basics here.

First, Gillis reviewed the state of Cisco’s security play. The company has an impressive footprint.

  • Cisco earned $2.2 billion in security revenue in its 2010 fiscal year, which represented a 14.5% growth rate over the previous year.
  • Cisco has 150 million VPN endpoint clients installed globally, and about 33% of them are the company’s new AnyConnect Secure Mobility client, a hybrid VPN/802.1X product.
  • Cisco’s Security Intelligence Operations (SIO) center, the company’s threat and vulnerability analysis lab, processes 20 billion URLs per day and has more than 500 security researchers, analysts and rule writers distributed across the world.

Next, Cisco dug into the details for the biggest security piece to come out of this week’s news: The Adaptive Security Appliance (ASA) 5585-X. This firewall/IPS/VPN gateway box is Cisco’s first attempt to offer a product with the scalability and power to compete with the data-center class versions of Juniper Networks’ SRX platform.

In the past networking pros have told me that the ASA 5500 series is a decent product that lacks the firepower and scalability for high-end data centers. Cisco hopes the 5585-X answers those critics.  Although the Cisco folks didn’t name the SRX or Juniper during this briefing, they did keep referring to vendor “J,” whose product’s specs bore an uncanny resemblance to the SRX3600.

The 5585-X comes in a 2 RU format (about 40% of the size of SRX boxes with similar specs) and offers 20 Gbps of simultaneous firewall and IPS throughput, 350,000 new connections per second and 8 million total connections. Cisco also said it draws less power than the vendor “J” product (785 watts to 1,750 watts).

The ASA 5585-X should give enterprises the ability to scale up the number of AnyConnect clients they deploy. AnyConnect is a hybrid IPsec/SSL VPN client and 802.1X supplicant. Cisco says it can run on pretty much any device and enable enterprises to provide secure network access to employees, partners and suppliers, regardless of what device they are on and where they are. Since 33% of Cisco’s VPN client footprint has already upgraded to this product, which was released earlier this year, customers should already be discovering for themselves whether AnyConnect is truly able to provide them with an open yet secure network.

Cisco has focused its marketing efforts on a broad range of new markets in recent years (telepresence, Flip video cameras, smart grid technology, and servers), leading some networking pros to question its commitment to its bread and butter markets like routing, switching and security. This week proved to me that Cisco is at least listening to those customers who are worried.


September 27, 2010  2:05 PM

A “Big Three” in networking? Cisco, HP and… IBM?

Shamus McGillicuddy

Can you imagine a world where Cisco Systems wasn’t THE networking vendor… a world where Cisco shares top dog status with two other companies with the products, resources and support capabilities to compete on equal ground with the longtime industry leader?

Cisco has been top dog in the enterprise networking market for quite a number of years. You can attribute its dominance to a variety of factors. It generally produces good, reliable technology that its customers are comfortable with.  It generates tens of billions of dollars in annual revenue and is highly profitable, which means customers can rest assured that Cisco will be around for the long haul to support and advance its products.

Cisco also tends to stay ahead of the networking industry’s innovation cycle. It has the resources available to drop $1 billion on research and development for a new product line such as its Nexus data center switches. And where it doesn’t lead in innovation, it can pick up a competitor like the wireless LAN vendor  Aironet Wireless Communications.

No other vendor in the networking market has the ability to do all these things, at least in the North American market. There was a time, 10 years ago, when companies like Nortel and 3Com seemed poised to bring Cisco down a notch, but neither company executed when it had the opportunity. Nortel collapsed and 3Com retreated.

For much of the last decade, most of Cisco’s competitors in the enterprise networking market have been spunky upstarts (Force10, Extreme, Enterasys, etc.) rather than multi-billion dollar industry giants.

Things are changing. HP, which has competed for years on the low-end of the networking industry with its ProCurve brand, acquired 3Com earlier this year. The deal was struck shortly after 3Com reinvented itself with its H3C brand of Chinese-developed high-performance networking products. 3Com was showing some promise with its new products, but at the time of the HP acquisition it had not yet succeeded in establishing a foothold in the market outside of Asia. Given its overall status as a gigantic, profitable IT vendor, HP now has the opportunity to compete with Cisco as a peer in the networking market… if it can execute its 3Com acquisition and convince Cisco customers to consider alternative vendors in critical parts of their networks.

Now we have indications that IBM is leaning toward a return to the networking industry. IBM made news today with its plans to buy Blade Network Technologies, a start-up which specializes in top-of-rack and blade chassis data center switches. It produces switches for both IBM and HP’s blade server chassis lines and it has a close relationship with Juniper Networks, itself an up-and-coming networking vendor which has a venture capital stake in Blade.

You may recall that in December IDC’s chief analyst Frank Gens predicted that IBM would buy Juniper in 2010. This Blade Network Technologies acquisition would appear to bring Juniper and IBM closer together than ever before. If Gens’ prediction of an IBM-Juniper marriage comes to pass, networking pros will suddenly find themselves in a position they’ve never been in before: A world where three of the world’s largest technology vendors all have well-regarded portfolios of enterprise networking products. Cisco, IBM and HP.

There’s an old adage in the industry that no one ever got fired for buying IBM. In recent years the term has been adapted by Cisco customers, many of whom say “No one ever gets fired for buying Cisco.” Could networking pros soon find themselves saying: “No one ever got fired for buying switches from IBM, HP or Cisco?”


September 21, 2010  2:29 PM

Computer networking babe goes ballistic … or gets kind of annoyed

Rivka Little

Disclaimer: This author attended a snooty liberal arts college and may have registered for one too many womyn’s studies classes.

A couple of recent blog entries by my colleagues at Network World Cisco Subnet warmed my heart: “Special Cisco Live Contest – Hottest Booth Girl” by Michael J. Morris, and: “Who’s the hottest video game chick?” by Jimmy Ray Purser.

The first closely examines which of the booth babes at Cisco Live was hottest, while the second goes a bit deeper, exploring the relationship between a father and son (Purser and his boy) through the scope of which video game vixen each finds sexier (i.e. The family that lusts together stays together).

I fully support my Network World colleagues in conducting such in-depth analysis of networking technology and its implications on network engineers. Great work, guys.

I also thank them for highlighting just how welcome women are at networking technology conferences and other forums for serious technology discussion.

Last year I attended LISA Usenix in Baltimore, which was easily the best conference I had attended all year. It was populated by long-haired, academic engineers rather than Dockers-wearing product marketers (read Interop). In my first two hours at the conference, I had three in-depth conversations: one about open source network management tools, another about open source community tactics in networking technology and a third about where The Clash went wrong. I was in heaven.

But when I walked into lunch the first day, I was also struck by the very same problem that hits me at every tech conference I attend: I could count the number of women in a room of around 200 on one hand. At least at LISA there were no half-naked chicks selling switches or promoting firewalls, and as a result, I felt welcome to discuss, contribute and learn. But it is very hard to feel like a serious participant at a conference like Cisco Live where (according to Morris’ blog) BlueCat Networks thought it best to have two girls in spandex explain IP Address Management, while NetOptics went the route of selling network management tools through a girl in short-shorts and a sailor cap.

In a response to the Cisco hot chicks blog, one of Morris’ readers sums it up perfectly:

“We encourage women to train for high-tech careers, and this degrading attitude doesn’t help,” the reader wrote. “Your editorial staff should consider your liability for a civil rights lawsuit, for creating a hostile work environment for women.”

As a journalist, freedom of speech is my religion, so I wouldn’t go as far as a lawsuit. But I do wonder why we journalists so easily waste this freedom – and further perpetuate a culture that ultimately shuts out women and the innovation they could contribute.

Then again, these two blogs may have had less to do with sexism and more to do with a desperate – and clumsy – grab for attention. In fact, I think my editorial director Susan Fogarty said it best in a quick email regarding the blogs, “This is just playing to the lowest common denominator and stirring the pot to spike up the page views.”

I choose a different path. I’ll take my page views from actual networking technology coverage and use my right to speak for things that matter.

In the meantime, we have a duty to work toward an IT community that fosters growth for women and encourages their contributions.


September 17, 2010  3:18 PM

Forrester’s Zero Trust security model

Shamus McGillicuddy

Forrester Research Inc. is proposing a new mantra for IT security. In his report “No More Chewy Centers: Introducing the Zero Trust Model of Information Network Security,” Forrester analyst John Kindervag suggests that we should dispense with the old Reaganism “Trust but verify” and replace it with “verify and never trust.”

Forrester says enterprises must adopt a zero trust security model because there is no reason to ever trust any packets passing over a network. Packets aren’t people. You can’t look at them and say, “That’s a packet that I have faith in not to betray me.” Malicious insiders and incompetent insiders alike are real threats that no hardened perimeter can protect against. There is always the potential for a person to abuse the access they have to network resources or to be negligent with it. You can’t afford to lower your guard just because someone has presented the proper credentials for getting on the network.

So what is a zero trust security model? Forrester is promising to roll out subsequent reports that detail the architecture it has in mind and some case studies from enterprises that have adopted something like it. This first report mostly argues the case for why enterprises should consider this security approach. It’s one of the more entertaining reads I’ve had in analyst research.

In the meantime, Kindervag lays out some basic concepts:

  • Use network access control (NAC) technologies to manage access to network resources tightly. Specifically, Kindervag says enterprises should consider role-based access control features from NAC vendors. Use this and other technologies to strictly enforce access privileges, giving users only the minimum of access they need to resources.
  • Even then, enterprises can’t assume that people won’t abuse or be careless with the access privileges they have. Traffic must be logged and, better yet, inspected. This requires more than the log management capabilities many security professionals use. Network analysis tools that can see and analyze flow technologies like NetFlow and sFlow are also critical to giving network security pros a real-time view into what’s happening on their networks.
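The two bullets above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's NAC product: the roles, resources and permission table are invented, and a real deployment would enforce this at the network layer rather than in application code. The point is the combination: least-privilege authorization plus a log entry for every decision, allow or deny.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Invented role-to-resource table: each role gets only what it needs.
ROLE_PERMISSIONS = {
    "contractor": {"wiki"},
    "engineer": {"wiki", "source-repo"},
    "finance": {"wiki", "erp"},
}

def authorize(user: str, role: str, resource: str) -> bool:
    """Grant access only if the role explicitly includes the resource,
    and log every decision (allow or deny) for later inspection."""
    allowed = resource in ROLE_PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s resource=%s allowed=%s",
                 user, role, resource, allowed)
    return allowed

print(authorize("alice", "engineer", "source-repo"))  # True
print(authorize("alice", "engineer", "erp"))          # False
```

Note that a denied request is logged just as loudly as an allowed one; in a zero trust posture, the audit trail of who tried what is as valuable as the enforcement itself.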

Kindervag writes that this approach will lead to more collaboration between networking pros and information security pros, because infosec folks are going to be using the network more actively than they have in the past to monitor and secure the enterprise. NAC products and network analysis products are often implemented on the network and managed by networking teams rather than security teams, so these two groups will have to come together more than they have in the past.

I’m hearing echoes in the zero trust model of what Cisco has talked about recently with its Borderless Networks strategy. That strategy is very much a network story, about providing access to network resources for users regardless of where they are, what devices they are using and how they connect those devices to the network. First and foremost this is a networking strategy for Cisco, but security is a critical piece. Cisco is aligning its security products so that network and security teams can make this ubiquitous access vision secure. I talked in depth about this concept with Cisco in my recent story on Cisco’s overall security strategy.

The concept is also relevant to other security trends we’re seeing right now. For instance, there’s a lot of chatter about the future of firewalls… about so-called next generation firewalls. Vendors like Palo Alto Networks have built firewall products that don’t rely on ports and protocols to determine whether to allow or disallow traffic in and out of a network. Instead they are building Layer 7 inspection engines that can identify traffic by application. Suddenly all those Port 80 apps that look like simple Web traffic to older firewalls are identifiable as YouTube, peer-to-peer sites and Facebook.
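The gap between the two approaches can be shown with a toy classifier. Everything below is invented for illustration (real Layer 7 engines use deep packet inspection and far richer signatures than substring matches), but it captures why two very different applications look identical to a port-based firewall:

```python
# Port-based view: anything on 80/443 is just "web".
PORT_RULES = {80: "web", 443: "web"}

# Toy Layer 7 "signatures" -- invented for this sketch.
APP_SIGNATURES = {
    b"youtube.com": "youtube",
    b"facebook.com": "facebook",
    b"BitTorrent protocol": "bittorrent",
}

def classify_by_port(dst_port: int) -> str:
    """An old-style firewall's view: the port is the application."""
    return PORT_RULES.get(dst_port, "unknown")

def classify_by_payload(payload: bytes) -> str:
    """A next-gen firewall's view: inspect the payload for app markers."""
    for signature, app in APP_SIGNATURES.items():
        if signature in payload:
            return app
    return "generic-web"

packet = {"dst_port": 80,
          "payload": b"GET /watch HTTP/1.1\r\nHost: youtube.com\r\n"}
print(classify_by_port(packet["dst_port"]))    # "web" -- all port 80 looks alike
print(classify_by_payload(packet["payload"]))  # "youtube" -- Layer 7 sees the app
```

Only the payload-aware path can attach a policy like "allow web browsing but block streaming video," which is exactly the pitch of the next-generation firewall vendors.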

The concept of deperimeterization — that a secure perimeter just isn’t good enough — has been bouncing around for years now. This zero trust model seems like a logical evolution of it. It’s a nice articulation of how enterprises need to adjust their mindset toward security fundamentally. Not only is the perimeter no longer the best line of defense. There is no single line of defense. You need to protect everything on your network everywhere on your network from everyone on your network.


September 8, 2010  4:19 PM

Some more advice on using commodity switches at the LAN access layer

Shamus McGillicuddy

Last week I wrote a story about how some enterprises save money by using commodity network switch vendors at the access edge of their local area networks. These low-cost vendors use merchant silicon and build basic-functionality switches to keep their costs low. While reporting this story, I emailed several questions to Bjarne Munch, an Australia-based principal research analyst with Gartner. Munch was on vacation at the time and was unable to respond to my questions until now. I’ve pasted my questions and his answers below.

1. You advocate that enterprises save money by using Layer 2 switches wherever possible. In what scenarios would an enterprise want to have layer 3 routing on their edge/access switches?

I would say not very often, but in cases with a high degree of VLAN segmentation there may be a need for routing in the access layer for a more distributed network design. Or in cases where the Layer 2 functionality does not offer sufficient QoS; this could be situations with heavy use of both voice and video from the desktop.

2. You mention that enterprises generally don’t need Gigabit Ethernet to the desktop. In what situations would you say enterprises should pull Gigabit all the way to the desktop?

If you add up the bandwidth needs for a typical enterprise user, incorporating UC and video, you will not even get close to 100 Mbps to the desktop. Some enterprises with CAD/CAM workloads, such as city planners or architects, may have higher bandwidth needs, as may medical settings working with X-ray images. But this is a niche that is typically easy to identify.
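As a rough sanity check of that claim, here is a back-of-the-envelope sum. The per-application figures are my own assumptions for a typical office user, not Munch's numbers:

```python
# Assumed peak bandwidth per application for one desktop user, in Mbps.
needs_mbps = {
    "voice call (G.711)": 0.1,
    "desktop video conference": 2.0,
    "email / web / file shares": 5.0,
    "UC presence and signaling": 0.1,
}

total = sum(needs_mbps.values())
print(f"{total:.1f} Mbps")  # 7.2 Mbps -- nowhere near a 100 Mbps port
```

Even tripling every figure leaves a comfortable margin on a Fast Ethernet port, which is the heart of the argument against paying for Gigabit to every desk.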

3. You mention that enterprises can drive costs down even further with commodity switches by adopting automation for operational tools. Could you elaborate on this further?

A large percentage of the ongoing cost is labor-based, i.e., time spent configuring or troubleshooting. For larger networks, operational tools that can automate these processes save time and thus reduce ongoing operational costs, i.e., bring down the TCO.
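A minimal sketch of what that automation can look like: generating per-switch configurations from a single template instead of hand-typing each one. The hostnames, addresses, VLAN numbers and template syntax here are all invented for illustration; real operational tools would render something similar and push it to devices over SSH or SNMP.

```python
# Hypothetical inventory of access switches.
ACCESS_SWITCHES = [
    {"hostname": "edge-sw-01", "mgmt_ip": "10.0.0.11", "data_vlan": 10},
    {"hostname": "edge-sw-02", "mgmt_ip": "10.0.0.12", "data_vlan": 20},
]

# One template, many switches: edits happen in one place.
TEMPLATE = """hostname {hostname}
interface vlan 1
 ip address {mgmt_ip} 255.255.255.0
vlan {data_vlan}
 name user-data
"""

def render_configs(switches):
    """Return a ready-to-push configuration string for each switch."""
    return {sw["hostname"]: TEMPLATE.format(**sw) for sw in switches}

configs = render_configs(ACCESS_SWITCHES)
print(configs["edge-sw-01"])
```

The labor saving Munch describes comes from exactly this shift: a change to the template propagates to every switch, rather than requiring an engineer to log in to each box.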

4. You talk about using fixed-format switches over modular ones where possible to drive down costs. In what kinds of situations will enterprises be required to deploy modular switches at the edge?

Most cases I have seen have been just-in-case investments where the enterprise was not sure of its needs, so it chose a modular switch partly for switch port expansion but also to house other functions such as a WLAN controller.

5. These low-cost vendors use merchant silicon instead of ASICs to keep costs low. What exactly is the value of those ASICs? What are enterprises losing by deploying switches with merchant silicon at the edge?

There is some loss of performance with merchant silicon, and there may also be some degree of performance variation depending on traffic load, but for most enterprises this is not really an issue at the edge of the network.

