The Network Hub


November 5, 2018  12:14 PM

5 questions to ask about open data centers

Zeus Kerravala
API, Data Center, data center and networking, Data centers, Extreme Networks, Networking, Open source

Editor’s note: In this opinion piece, industry analyst Zeus Kerravala shares his thoughts on open data centers and Extreme Networks’ approach. Extreme Networks is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.

Open is an overused word in the networking industry. Every vendor claims to be open. The degree of openness can vary greatly from vendor to vendor, however. In theory, having one open API is technically open. An engineer or developer can’t do much with a single API, but it is open.

Consider mobile phones, for example. Apple’s interfaces are open, but they only let developers and partners access the features Apple enables; the rest of the phone is locked down. The same is true in networking. As a result, network professionals need to do their homework and ask the right questions about open data centers.

The terms “open” and “standards-based” are often used interchangeably, but they are two different things. A vendor can be open and standards-based or closed and standards-based. Similarly, the vendor can be proprietary and open or proprietary and closed. For example, Apple’s open interfaces are proprietary, as is Windows. Linux is open and standards-based, so anyone can work on it.

Network professionals considering a network refresh in the data center are likely to hear the word open tossed around a lot. But networking pros need to understand how open data centers and vendors actually are. I would start by asking the following questions:

  • Is your service multi-vendor?
  • Which vendors do you support?
  • Do you support just network vendors or can you support other technologies, such as application delivery controllers and security appliances?
  • How big is the community of developers that uses your software products?
  • How easily can I interface with other users of the product?

Open source tool enables automation

One networking vendor, Extreme Networks, this week unveiled a data center product with an emphasis on open. Extreme’s new Agile Data Center comprises several products, including Extreme Workflow Composer, Embedded Fabric Automation, Extreme Management Center (XMC), ExtremeAnalytics and two new hardware platforms.

This is the first major data center announcement for Extreme since it acquired Brocade’s data center assets in 2017. Agile Data Center combines Brocade and Extreme technologies.

Rather than relying heavily on raw APIs for network programmability and automation, Extreme built Workflow Composer on StackStorm, an open source platform designed for runbook automation. It comes out of the DevOps automation world and focuses primarily on running workflows.

Extreme could have built Workflow Composer as a proprietary tool. Instead, it opted for an open source foundation that is familiar to engineers and comes with an ecosystem already creating integration packs.
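To make that concrete, here is a minimal sketch of what driving a StackStorm workflow from code can look like, using StackStorm’s REST API from Python. The host, API key and the network.provision_vlan action are hypothetical placeholders, and Workflow Composer’s packaging may differ from stock StackStorm.

    import requests

    # Hypothetical StackStorm endpoint and credentials.
    ST2_API = "https://stackstorm.example.com/api/v1"
    HEADERS = {"St2-Api-Key": "YOUR_API_KEY"}  # StackStorm's API-key auth header

    # StackStorm starts actions and workflows via POST /api/v1/executions.
    # Actions are addressed as "<pack>.<action>"; this one is invented
    # for illustration.
    payload = {
        "action": "network.provision_vlan",
        "parameters": {"vlan_id": 120, "switches": ["leaf1", "leaf2"]},
    }

    resp = requests.post(f"{ST2_API}/executions", json=payload, headers=HEADERS)
    resp.raise_for_status()
    print("Execution started:", resp.json()["id"])

The same POST could just as easily be fired by a monitoring alert, which is the event-driven runbook pattern StackStorm is built around.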

Multi-vendor network integration

Extreme’s definition of open essentially means no vendor lock-in. Workflow Composer can automate workflows across any vendor’s gear, including Arista, Cisco and Juniper. Extreme can integrate with more than 100 vendors that have integration packs on exchange.stackstorm.org. Customers may have to tweak the code some, but they do not have to start with a blank sheet of paper.

StackStorm extends beyond networking, too. As a result, engineers who use Workflow Composer can extend the automation capabilities to things like Palo Alto and Check Point firewalls, VMware vSphere, ServiceNow’s service desk and others.

You could argue the network is the foundation of a modernized data center as it provides the connectivity fabric between everything. But open data centers incorporate more than just networking. By building Workflow Composer on StackStorm, Extreme can orchestrate and automate workflows from the network to the application — and everything in between.

Getting its foot in the door

XMC is also designed for multi-vendor management, with the goal of helping customers transition from an old data center to a modernized one without having to rip and replace. Extreme’s motivation here is not magnanimity; it’s a strategic bet.

Despite being close to $1 billion in revenue, Extreme is still a minority-share player in networking. Don’t expect a customer to take out its existing infrastructure and replace it with Extreme overnight.

A better approach is to help customers manage their environments and then compete for the business when the network devices come up for renewal. This is similar to the approach Aruba Networks took as a startup with its AirWave management tool, which at one time managed Cisco access points (APs) better than Cisco’s own tools did. The strategy got Aruba’s foot in the door at many companies, which it eventually parlayed into AP sales.

October 24, 2018  1:09 PM

Expect slow adoption of 400 Gigabit Ethernet, for now

Zeus Kerravala
Ethernet, Networking

Editor’s note: In this opinion piece, industry analyst Zeus Kerravala shares his thoughts on 400 Gigabit Ethernet adoption and Arista Networks’ 400 GbE approach. Arista is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.

Speeds of 400 Gigabit Ethernet might seem futuristic, but some use cases already make sense today. For example, advanced applications, such as artificial intelligence, virtual reality and serverless computing, require three things to be fast: networks, storage and compute. If any of the three falters, the application won’t perform optimally.

The rise of GPUs as a data center resource has seen compute speeds grow exponentially. Flash storage and NVMe have given storage performance an exponential jump. And 400 Gigabit Ethernet will enable the network to keep up.

As an example, I recently chatted with a data scientist at a healthcare group in the Boston area. He told me his organization’s biggest AI challenge is ensuring GPUs are fed enough data to keep them busy.

In this case, a next-generation server helps. But the organization also needs a network that can deliver data fast enough to push GPU utilization to near peak. Over-investing in any one area wastes money, so the three legs of the application stool must be kept in lockstep.
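To put rough numbers on that, here is a back-of-the-envelope Python sketch (my illustrative figures, not the healthcare group’s) of how long it takes just to move a training dataset at different line rates:

    # Time to move a 10 TB training dataset at ideal line rate. Real
    # throughput is lower once protocol overhead is counted.
    DATASET_BYTES = 10 * 10**12  # 10 TB

    for gbps in (10, 100, 400):
        bytes_per_sec = gbps * 10**9 / 8
        minutes = DATASET_BYTES / bytes_per_sec / 60
        print(f"{gbps:>3} GbE: {minutes:6.1f} minutes")

    # 10 GbE: ~133 min; 100 GbE: ~13 min; 400 GbE: ~3 min. A GPU cluster
    # idling for two hours waiting on data is exactly the imbalance
    # described above.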

400 Gigabit Ethernet needs solid support system

Despite the potential and product availability, I believe 400 GbE adoption will be light — initially. Then, in 2020, we’ll see growing interest, following a similar pattern as 100 GbE.

The technology is not the problem. Early adopters, such as web-scale companies, will likely adopt 400 GbE as soon as it’s available, because that’s how they deploy all new technology.

For the rest of the world, though, adoption of 400 GbE networking requires greater ecosystem support. For example, the technology needs broader availability and lower-priced optics, cabling and server connectors.

Next year will bring 400 GbE products to market, but it won’t deliver the large ecosystem required for mass adoption. Also, over the next 12 months, the price of 400 GbE, particularly the optics, will fall — making it more affordable for everyone.

A network that operates at 400 Gigabit Ethernet may seem like overkill. But if I’ve learned one thing in nearly 40 years in this industry, it’s that no matter how much bandwidth is available, we find a way to consume it. Even if 400 GbE is not right for your business today, because of price or other factors, you should still educate yourself on the options so you can make the right decision when the time comes.

Network needs to keep up with other trends

One networking vendor, Arista Networks in Santa Clara, Calif., provided some details this week on its 400 Gigabit Ethernet roadmap. The vendor’s new 7060X4 Series offers 32 ports of 400 GbE in a 1 rack unit chassis. The products are based on Broadcom Tomahawk 3 silicon that offers 12.8 Tbps of switching capacity.

Customers that deploy the switch can split each 400 GbE port into four 100 GbE ports, for a total of 128 ports of 100 GbE. Network managers can deploy the switches at 100 GbE today and migrate to 400 GbE when required.

In 2015, Arista introduced its 7060X with 32 ports of 100 GbE enabled by 128 lanes of 25 Gig serializer/deserializer (SerDes). In 2017, the 7260X3 Series brought 64 ports of 100 GbE using 256 lanes of 25 Gig SerDes.

Now, in 2018, the 7060X4 has 32 ports of 400 GbE on a switch with 256 lanes of 50 Gig SerDes. That is a fourfold increase in capacity since 2015. Additionally, the 7060X4 Series features new traffic management and load balancing capabilities.
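The capacity math is easy to verify. The quick sketch below uses only the lane and port figures quoted above:

    # Capacity per generation = SerDes lanes x per-lane rate, which must
    # equal ports x port speed; they are two views of the same fabric.
    generations = [
        ("7060X (2015)", 128, 25, 32, 100),
        ("7260X3 (2017)", 256, 25, 64, 100),
        ("7060X4 (2018)", 256, 50, 32, 400),
    ]

    for name, lanes, lane_gbps, ports, port_gbps in generations:
        assert lanes * lane_gbps == ports * port_gbps
        print(f"{name}: {lanes * lane_gbps / 1000:.1f} Tbps")

    # 3.2 -> 6.4 -> 12.8 Tbps, and 32 x 4 = 128 breakout ports of
    # 100 GbE, as noted above.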

Given the growth in data and bandwidth use, this kind of Moore’s Law performance is crucial for the network to keep up with other tech trends, such as GPUs and flash storage.

A debate over optics

Arista’s 400 GbE series is available in two configurations that are virtually identical except for the optical connectors they support. The 7060PX4-32 model uses OSFP optics, and the 7060DX4-32 supports QSFP-DD optics.

Currently, the networking industry is debating which optic is “better,” and the answer depends on the customer. QSFP-DD is backward-compatible with 100 GbE QSFP optics, making it ideal for customers who want to migrate gradually from 100 GbE to 400 GbE.

From a technology perspective, OSFP optics are better: they cool more easily, consume less power and offer more options. OSFP also supports 100 km single-mode fiber, making it well-suited for data center interconnect, while QSFP-DD does not.

Arista does offer a passive OSFP-to-QSFP adapter, enabling customers to deploy a 400 GbE switch today and run it at 100 GbE. Ultimately, though, customers must choose between better backward-facing features for ease of migration and better forward-looking ones.

Personally, I favor the OSFP connector, because the type of organization that would deploy 400 GbE today would benefit from its additional capabilities. I understand the appeal of QSFP-DD connectors, but migrating slowly to 400 GbE seems like an oxymoron.


October 16, 2018  11:22 AM

Merging wired and wireless networks spawns valuable analytics

Zeus Kerravala

Editor’s note: Industry analyst Zeus Kerravala provides his thoughts on Arista Networks’ effort to unify wired and wireless networks. Arista is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.

Anyone who has used a wireless device has likely experienced a scenario where the device was connected to the access point but no network services worked. Or perhaps the device was connected, got booted off, and the user couldn’t re-establish connectivity. These problems have been around as long as Wi-Fi and can affect worker productivity and company revenue.

In the past, Wi-Fi flakiness was annoying, but it wasn’t business-critical because wireless was considered a network of convenience. Today, however, that has changed. Many workers need Wi-Fi to do their jobs because roaming around a campus has become the norm.

Also, Wi-Fi-connected IoT devices have proliferated. Consequently, wireless network outages or performance problems will result in key business processes not functioning properly.

Network administrators have a hard time troubleshooting Wi-Fi problems. A recent ZK Research survey found many network engineers spend about 20% of their time troubleshooting Wi-Fi issues. Often the problem disappears before it’s fixed. But the root cause is still there, and the issue will likely re-emerge.

The Wi-Fi network is now mission-critical and arguably as important as the data center network.

Data center and campus edge come together

Networking vendor Arista Networks, based in Santa Clara, Calif., is looking to address Wi-Fi issues. The company announced this week its Cognitive Campus architecture — a suite of tools that unifies wired and wireless networks by applying a software-driven approach to the campus. To date, Arista has found most of its success by selling its products into data centers.

Cognitive Campus sheds some light on Arista’s planned acquisition of Mojo Networks. Earlier this year, Arista said it would acquire Mojo, a company that sells its products at the campus edge, signaling it wants to be a bigger player in the enterprise networking market.

Arista has other campus products, but they’re targeted at the campus core where the requirements are similar to the data center. As a result, Mojo is Arista’s first true campus edge offering.

Specifically, Arista is looking to use Mojo’s Cognitive WiFi to remove traditional bottlenecks created by Wi-Fi controllers. Traditional Wi-Fi products have focused on ensuring connectivity rather than understanding application performance or client health.

Cognitive WiFi — combined with Arista’s CloudVision management suite — looks to provide better visibility into network performance so network engineers can identify the source of a Wi-Fi problem before it affects business. Arista has integrated the wireless edge information into CloudVision.

Mojo’s management model disaggregates the control and data planes, so its cloud controller handles only management and configuration updates. If the access points (APs) lose their connection to the controller, the network continues to operate. Most other APs would stop working if controller connectivity were lost.

As part of Cognitive Campus, Arista can aggregate data from the wired network and combine it with wireless data to perform cognitive analytics across the network.
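As a sketch of what that correlation enables, the snippet below joins wireless client-health records with the wired ports their APs uplink through. The record formats here are hypothetical, not Arista’s CloudVision or Cognitive WiFi schemas; the point is the cross-domain join.

    # A wireless symptom is checked against the wired port the client's
    # AP hangs off, so the root cause isn't assumed to be the radio.
    wifi_clients = [
        {"mac": "aa:bb:cc:00:00:01", "ap": "ap-3f-east", "rssi_dbm": -71, "retry_pct": 38},
        {"mac": "aa:bb:cc:00:00:02", "ap": "ap-3f-west", "rssi_dbm": -52, "retry_pct": 4},
    ]
    ap_uplinks = {  # from switch LLDP/MAC-table data
        "ap-3f-east": {"switch": "leaf-3f-1", "port": "Ethernet12", "in_errors": 1240},
        "ap-3f-west": {"switch": "leaf-3f-1", "port": "Ethernet14", "in_errors": 0},
    }

    for client in wifi_clients:
        uplink = ap_uplinks[client["ap"]]
        if client["retry_pct"] > 20 and uplink["in_errors"] > 0:
            print(f"{client['mac']}: high Wi-Fi retries AND errors on "
                  f"{uplink['switch']}/{uplink['port']} -- suspect the wired uplink")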

The importance of analytics

Arista’s planned acquisition of Mojo left some industry observers puzzled. On the surface, a data center and the wireless edge don’t have much in common.

However, the intersection of the two spawns a treasure trove of data, and analytics applied to that data can be used to transform the network. Arista’s Cognitive software brings some of that visibility and intelligence to the campus network.

Network professionals should rethink network operations and embrace the analytics and automation entering the campus network.

For the past five years, my advice to engineers has been: If you’re doing something today that’s not strategic to your company or your resume, stop doing it by hand and find a way to automate it. Wireless connectivity and performance issues are excellent examples.

I’ve never heard of engineers getting hired because they were really good at solving problems that shouldn’t happen in the first place. Focus on software skills, data analytics and architecture, and understanding the user experience. Those skills are required in the digital era.


October 3, 2018  1:48 PM

Palo Alto Networks adds cloud analytics with RedLock acquisition

Zeus Kerravala
Cloud analytics, Cloud Security, Networks, Palo Alto Networks, Public Cloud

It’s fair to say the cloud has become a core component of most organizations’ IT strategies. The growth of public cloud services — such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) — has been remarkable as application developers and IT professionals look to simplify the way they work and speed up deployment times.

As is the case with most things, however, for every yin there’s a yang. The cloud does have a dark side. In this case, it’s security. When I think of the effect the cloud has had on network and security teams, I’m reminded of the scene in Shakespeare’s Julius Caesar when Mark Antony shouts, “Cry ‘havoc!’, and let slip the dogs of war.”

Cry havoc indeed. The tight control IT used to have over the environment is gone, as public cloud services depend largely on the internet for transport, making workloads easier to breach than if they sat in a tightly controlled data center. Common challenges with public clouds include a lack of visibility across multiple clouds, a lack of centralized control, keeping up with compliance mandates, and detecting and responding to threats fast enough.

These problems have given rise to many security startups, each aimed at solving a piece of the cloud security puzzle. That, of course, introduces new challenges, as securing the cloud then requires manual correlation of data across tools, which is slow, error-prone and leads to gaps in coverage. Cloud security is much like a puzzle that hasn’t been put together: All the pieces are there, but it takes a lot of effort to get the full picture.

Palo Alto RedLock acquisition automates cloud security

Palo Alto Networks is trying to simplify the process of securing the cloud. Coming into 2018, the company had solid network and endpoint security products. In March, it added cloud security vendor Evident in a $300 million acquisition. Evident brought a rich set of cloud compliance capabilities to the Palo Alto platform.

This week, Palo Alto announced its intention to buy cloud threat defense vendor RedLock for $173 million. RedLock’s strength is its analytic and automation capabilities that help network and security teams replace manual inspection of network traffic with automated, real-time remediation.

RedLock’s products capture detailed events across all major public cloud platforms so customers can quickly see and fix threats. The vendor correlates resource configurations with network traffic and third-party feeds to expose vulnerabilities, identify compromised accounts and find insider threats through analysis of user behavior. The product then automates remediation by integrating with existing incident response workflows.

For example, if a developer accidentally leaked cloud access keys on a site such as GitHub, a hacker could steal them and break into the cloud environment using those keys. RedLock’s analytic engine would recognize the key was being used in a strange location to do unusual things and immediately alert the security team with a full history of activities associated with that key.
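As an illustration of the kind of check that gets automated here, the Python sketch below queries AWS CloudTrail with boto3 for activity tied to a suspect access key and flags use from unexpected regions. The key ID and baseline region set are hypothetical, and a real detector, RedLock’s included, would learn the baseline from history rather than hard-coding it.

    import json

    import boto3

    SUSPECT_KEY = "AKIAEXAMPLEKEY0001"  # hypothetical leaked key ID
    BASELINE_REGIONS = {"us-east-1"}    # regions where this key is normally used

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": SUSPECT_KEY}],
        MaxResults=50,
    )

    for event in response["Events"]:
        record = json.loads(event["CloudTrailEvent"])  # full event record as JSON
        region = record.get("awsRegion", "unknown")
        if region not in BASELINE_REGIONS:
            print(f"ALERT: {record.get('eventName')} from {region} by "
                  f"{record.get('userIdentity', {}).get('arn', 'unknown identity')}")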

Analytics and security go hand in hand

I can’t overstate the importance of analytics in an organization’s security strategy. Too much data comes in from too many sources to be analyzed manually. Highly skilled network and security professionals might have managed manually in the past, but today it’s just not possible. Analytics and automation should be viewed as an engineer’s best friends, as they can greatly augment skill sets.

The addition of RedLock brings Palo Alto customers some benefit today. But, over time, the company plans to integrate the two platforms, creating a “1+1 = 3” scenario. The combination of Evident and RedLock brings the following capabilities to Palo Alto customers in a single platform:

  • Continuous discovery and inventory of public clouds via a centralized dashboard that shows assets across AWS, GCP and Azure across multiple accounts and regions;
  • Real-time compliance reporting for industry standards such as NIST, PCI, HIPAA, GDPR and CIS. Customers can access customized reports with a single click;
  • Ability to prioritize vulnerabilities, detect cloud threats and investigate incidents in minutes, as well as provide automated remediation of security risks and policy violations across all major clouds.

Additionally, Palo Alto’s other products can be used to protect other parts of the cloud ecosystem. For example, its VM-Series products protect and segment cloud workloads, and Traps secures operating systems and applications within workloads.

Cloud security doesn’t have an “easy button” because it requires multiple products to protect the different areas of the cloud. The addition of RedLock brings rich analytic capabilities, enhancing Palo Alto’s already-robust cloud security portfolio that now offers protection and compliance across the entire public cloud journey.


May 1, 2017  5:01 PM

Cisco to acquire Viptela

Chuck Moozakis
Cisco, SD-WAN

Cisco said it would acquire SD-WAN vendor Viptela Inc. in a deal valued at $610 million. The transaction, expected to close later this year, will fortify Cisco’s existing SD-WAN portfolio with additional cloud-based services, the company said in a blog post.

“Viptela’s technology is cloud-first, with a focus on simplicity and ease of deployment while simultaneously providing a rich set of capabilities and scale,” said Scott Harrell, Cisco’s senior vice president of product management, Enterprise Networking Group, in a statement. “With Viptela and Cisco, we will be able to deliver a comprehensive portfolio of on-premises, hybrid and cloud-based SD-WAN solutions.”

Viptela, based in San Jose, Calif., emerged from stealth three years ago with an SD-WAN framework it called the Secure Extensible Network (SEN). The platform includes physical vEdge routers that form a secure data plane; a central controller, running on an x86 server either on-site or in the cloud, orchestrates connectivity among the routers. Viptela sells the framework to enterprises, and its technology also underpins managed SD-WAN services offered by a number of providers, including Verizon and Singtel.

SD-WAN continues to be a hot market, with revenues expected to eclipse $6 billion by 2020, according to IDC.

Cisco said the acquisition will dovetail with its Digital Network Architecture strategy to support software-driven networks that are more programmable, responsive and dynamic. Viptela will join Cisco’s enterprise routing unit within the vendor’s Networking and Security Group, led by Senior Vice President and General Manager David Goeckeler.


January 16, 2017  12:38 PM

Are MPLS network deployments in decline?

Robert Sturt

A fair amount of credence has been given to media statements suggesting MPLS network services are decreasing in popularity.

I believe WAN technology is becoming less about products and more about capability, due to the rise of cloud services and the use of the Internet.

A few years ago, the default IT management decision was split between companies with a direct interest in private services (MPLS) and companies with an interest in public services (the IPsec VPN). I appreciate this view oversimplifies things, but you get the idea.

Today, IT capability is becoming complicated. The hard part is figuring out how to service the unbelievable technology we hold in our hands together with the resources on offer from cloud vendors.

When I say unbelievable, I really do mean it in the true sense of the word.

The latest phones have processing power comparable to desktop PCs of only a few years ago. When these devices are coupled with access to applications that reside on both the corporate infrastructure and the Internet, they become some of the most valuable devices within the enterprise.

If you read various telecoms publications, experts believe the future is mobile, and I think they might be right. (Note: Clearly, laptops remain very relevant, in combination with tablets and phones.)

The way we work is taking us directly down a path toward more flexible network design and architecture.

MPLS vs SD-WAN

Perhaps the biggest challenger for proponents of MPLS networks is SD-WAN. It promises to free the enterprise from restrictive, private MPLS by offering granular traffic prioritisation, security and privacy within a single box or application.

The fact remains, careful consideration must be given to the underlying connectivity used with SD-WAN. There is no doubt the Internet is a far more scalable platform than it was even a few years ago, but IT teams must still consider the laws of physics. In other words, the distance between locations on a global network, coupled with the use of multiple ISP backbones, can degrade performance.
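The physics is easy to quantify. Light in fibre covers roughly 200 km per millisecond, so distance alone sets a floor under round-trip time before a single queue is crossed. The sketch below uses approximate great-circle distances; real fibre paths are longer still:

    # Propagation floor on round-trip time. Crossing multiple ISP
    # backbones adds queuing and handoff delay on top of this.
    FIBRE_KM_PER_MS = 200.0

    routes_km = {  # approximate great-circle distances
        "London -> Frankfurt": 640,
        "London -> New York": 5570,
        "London -> Singapore": 10850,
    }

    for route, km in routes_km.items():
        rtt_ms = 2 * km / FIBRE_KM_PER_MS  # out and back
        print(f"{route}: >= {rtt_ms:5.1f} ms RTT before any queuing")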

Fig. 1: an SD-WAN network deployed over multiple ISP backbones.

With this said, regardless of the actual product, analysis of your specific requirements against the provider’s capability remains as important as ever. If your organisation begins by asking the right questions, the answers will determine whether any WAN product is fit for purpose; this applies to MPLS, VPLS and SD-WAN alike.

First, the top three reasons why MPLS remains relevant:

1. MPLS VPN services are delivered across private infrastructure, so IPsec and other encryption services are not required.

2. MPLS QoS (quality of service) gives the enterprise the ability to prioritise applications, including real-time traffic such as voice and video, as well as other mission-critical apps such as Citrix (see the marking sketch after this list).

3. Service-level agreements covering latency, jitter, uptime and other performance factors are generally tighter on private infrastructure.
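On the QoS point, prioritisation starts with the application (or edge device) marking its traffic so the provider’s classifiers can act on it. Below is a minimal Python sketch, assuming the standard DSCP value for voice; whether the marking is honoured end to end depends on the QoS policy agreed with your MPLS provider.

    import socket

    # Mark a UDP socket's traffic with DSCP EF (46), the conventional
    # value for voice. DSCP occupies the top six bits of the IP TOS
    # byte, hence the shift by two.
    DSCP_EF = 46

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    sock.sendto(b"rtp-payload", ("192.0.2.10", 5004))  # RFC 5737 example address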

And now the top three reasons why SD-WAN services are of interest:

  1. SD-WAN typically leverages the Internet to deliver secure, highly flexible, encrypted services to any form of device over any connectivity.
  2. SD-WAN services promise complete control and flexibility via easy-to-use, software-driven portals.
  3. The pace of SD-WAN innovation means the enterprise should see new features and enhancements released on a much more regular time frame than MPLS network products.

Is a hybrid WAN the future?

The typical network today is generally not based on a single product or service. In fact, the majority of deployments consist of core MPLS connectivity between key offices, with SD-WAN (or DMVPN) connectivity over the Internet for smaller offices and remote users. Data centre and hosting facilities are connected via Layer 2 VPLS or point-to-point/multipoint connectivity.

(Architecture and WAN design are out of scope here.)

As I have previously mentioned, the requirements of enterprise business are rarely met by one platform; the result is generally a hybrid. There are exceptions: A national or well-scaled global architecture could be delivered over a single SD-WAN deployment, but as noted earlier, careful thought must be given to application performance in terms of packet latency.

I have personally worked with one organisation in the US whose platform is based on hybrid connectivity. The circuit is delivered as point-to-point Ethernet into the provider’s network, but an intelligent device allows the business to decide what the circuit should become, i.e., Layer 2, Layer 3 or even Internet. (If you would benefit from knowing the provider name, drop me a message.)

Conclusion

Everybody has an opinion on whether MPLS is in decline. I take the view that private connectivity will always be a requirement for enterprise business, if only from the perspective of privacy. So, no, MPLS is not doomed, and it should remain an essential part of the toolkit.

I do believe SD-WAN will seriously erode the popularity of MPLS VPN as the default WAN type, especially for organisations that are able to predict the performance of their Internet connectivity.

If you are deploying SD-WAN over a single IP backbone or multiple ‘known’ backbones, there is every possibility SD-WAN could be the only technology required, depending on your viewpoint.


December 7, 2016  2:03 PM

Ethernet switch market up 2%: IDC

Chuck Moozakis

The worldwide Ethernet switch market grew 2%, racking up revenues of $6.29 billion in the third quarter, according to IDC’s Worldwide Quarterly Ethernet Switch Tracker and Worldwide Quarterly Router Tracker reports.

Router revenues, meantime, rose 2.6%, to $3.56 billion as enterprises and service providers beefed up their infrastructures.

Cisco continued to see erosion in its Ethernet switch market share, IDC said, with the vendor now capturing 57% of the market, down 6.5% from Q3 2015 totals. Hewlett Packard Enterprise’s switch sales also fell, but Juniper Networks and Arista Networks both saw increases in Ethernet switch sales, with Arista notching a 31.5% hike in revenues year over year. Huawei’s Ethernet switch sales almost doubled in the period; the Chinese vendor now has 7.2% of the switching market.

“Recent macro-economic developments and maturing IT architectures have led to a spectrum of reactions by IT decision-makers across the regions with regard to Ethernet switching investments in 3Q 16,” said Rohit Mehra, IDC’s vice president of network infrastructure, in a statement. “Strong growth in the 40 GbE and 100 GbE segments specific to data center deployments brought a degree of stabilization to a market in transition where the enterprise campus market for switching declined.”

40 GbE switch sales grow

IDC said 10 GbE switch sales dropped 1.3% year over year, to $2.22 billion, while 40 GbE switch revenue jumped 20%, to $756.4 million. The two Ethernet switch market standards are now being joined by an emerging 100 GbE switch market, which saw revenues triple on an annualized basis in the third quarter. One-GbE switch revenue dropped 4.3% year over year, IDC said.

The increase in router sales was sparked by an 8.2% increase in enterprise routing, IDC said, cautioning that the market bears close review as more companies evaluate the use of new SD-WAN technologies.

“Software-defined network architectures and network transformation for the digital economy are among the factors shaking up the core network infrastructure segments,” said Petr Jirovsky, IDC’s research manager, Worldwide Networking Trackers, in a statement.


November 2, 2016  3:11 PM

Broadcom buys Brocade in $5.9B deal

Chuck Moozakis

Broadcom Ltd. said Nov. 2 it would acquire storage and networking supplier Brocade Communications Systems Inc. in a deal valued at $5.9 billion.

Chip-maker Broadcom said it will keep Brocade’s Fibre Channel and storage area networking line but will sell the company’s IP networking business, which includes routing and switching, as well as the wireless technology Brocade only recently acquired from Ruckus Wireless.

“This strategic acquisition enhances Broadcom’s position as one of the leading providers of enterprise storage connectivity solutions to OEM customers,” said Hock Tan, Broadcom’s CEO, in a statement. “With deep expertise in mission-critical storage networking, Brocade increases our ability to address the evolving needs of our OEM customers. In addition, we are confident that we will find a great home for Brocade’s valuable IP networking business that will best position that business for its next phase of growth.”

Since acquiring Foundry Networks eight years ago, Brocade has struggled to carve out a significant niche in the enterprise networking market. Broadcom is selling the IP business in part so that its relationships with networking customers that buy its chips, which include Cisco and Juniper, won’t be imperiled.

Broadcom said the transaction is expected to close in mid-2017.


September 30, 2016  2:58 PM

Looking at the future of networking, Reddit style

Eamon Earls

What does the coming decade hold in store for networking’s future?

That’s the question recently posed by a contributor on Reddit’s r/networking enterprise networking forum.

The answers: Advances in IP networking, mesh networking and SDN were among the most common predictions.

User dm18 mentioned shifts in networking that would favor software and IoT. Possible changes included self-organizing networks with no need for manual configuration, and cloud-based systems to automate threat response, patch management and backup. The same user projected widespread interconnection, with outdoor access points powered by built-in batteries and solar panels, and interconnections among access control, cameras, climate control, lights, fire alarms, facial recognition and appliances.

Device density might necessitate more organized networks and eliminate large segments of home networking as telcos and large IT companies like Google step in to provide free, city-wide wireless. Managing huge quantities of data—perhaps transmitted wirelessly—might mean a new emphasis on data compression.

Some users offered up networking humor in response to the question about the future of computer networking. “In 15 years, all network gear (switches, routers, etc.) will have built in Jet Packs so that they won’t need a rack, they will just hover on jets in the designated space,” one user commented. However, others struck a more serious note, suggesting widespread mesh networking and SDN fully fulfilling its promises by 2031.

Networks will automate

“Large-budget networks will automate; small-budget networks won’t,” said user jiannone, looking to networking’s future. “Small-budget networks [will] get by on branded whiteboxes [sic] with licensing and support fees attached to low cost, high-enough throughput boxes that are more ASIC than general CPU architectures,” the user added, suggesting that today’s entertainment infrastructure may shift to IP.

User patchate brought up PCI Express switching in the discussion of the future of computer networking: “It doesn’t offer any compatibility with current Ethernet-based technologies, but the underlying technology seems sound to me, at least for short-range interconnects.” The user added, enthusiastically, “If ToR switches could be replaced by PCI express lanes with CPUs having DMA access to any device installed within a rack, that would be so very incredibly awesome.”


September 14, 2016  12:56 PM

Extreme buying Zebra’s WLAN biz for $55M

Chuck Moozakis

Extreme Networks Inc. said it will purchase Zebra Technologies Corp.’s wireless LAN business for $55 million in cash to bolster its existing WLAN portfolio.

The transaction is expected to close later this year pending closing conditions and regulatory approvals.

In a blog post, Extreme CEO Ed Meyercord said Zebra’s WLAN products will be meshed with the company’s ExtremeWireless product line. Among the Zebra products are a series of new access point offerings, including wall plate and tri-radio APs. Zebra’s wireless intrusion prevention system will also be integrated into Extreme’s products. Zebra is also known for its NSight visibility and analytics tool.

“WLAN is the fastest growing segment in the networking industry,” Meyercord said in a statement. “Our heritage of delivering innovative and pioneering technology is reinforced with today’s announcement, underscoring our commitment to providing customers worldwide with unified visibility and control across their wired and wireless networks.”

Farpoint Group principal analyst Craig Mathias said the deal will provide additional technological heft to Extreme, which will also inherit products Zebra acquired from its 2014 purchase of Motorola Solutions’ enterprise group. “They have a strong customer base and perhaps even some useful products at a bargain price,” he said.

Extreme’s purchase of Zebra’s WLAN operations comes as networking systems vendors continue to snap up wireless vendors. In the last four years, Cisco acquired Meraki, Hewlett Packard Enterprise purchased Aruba Networks, Fortinet bought Meru Networks and most recently, Brocade plunked down $1.2 billion to purchase Ruckus Wireless. Dell-EMC, meantime, struck an agreement with Aerohive Networks earlier this year to consolidate some of its products and to resell others in a bid to extend Dell’s enterprise switching business.

