If you are at the initial investigation stage of SD-WAN procurement, you’ll know how many moving parts are involved. In this article, I’ll discuss 18 features and benefits together with 10 vendors which IT teams may wish to consider when buying SD-WAN solutions for U.K., U.S. and global businesses.
I’ve also written a more exhaustive article on the top SD-WAN solutions if your business is building an up-to-date shortlist of possible SD-WAN providers for U.K., U.S. and global services. There’s also a mind map, which displays each vendor together with a checklist.
Understanding which features are important, together with any value-added services, forms the basis of the buyer’s task. One of the benefits of SD-WAN is the way the technology supports the network in many more ways than traditional WAN services can, thanks to its breadth of capability.
I am also seeing more businesses turn to managed SD-WAN for certain aspects. Some are considering co-managed (perhaps just for provisioning at certain locations), but others are using end-to-end management to maximise capability. Digital transformation is facilitated by SD-WAN but there is a skill set and expertise required to make it happen.
SD-WAN is capable of transforming the way your business operates, from single-pane-of-glass management to WAN optimisation, path selection, security and reporting. That said, the number of features available means IT teams must conduct comprehensive due diligence to align requirements with the right service. Some aspects of SD-WAN services are linked together, i.e., one feature is needed to enable another. Zero-touch deployment, for example, is enhanced by LTE wireless, which can bring branch-office WAN connections online at a moment’s notice where fast-start or WAN migration services are required.
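To illustrate the zero-touch idea in pseudocode-like Python: the appliance identifies itself (for example, by serial number), uses whichever uplink comes live first, LTE included, and pulls its configuration from the orchestrator. The names and the in-memory config store below are invented for illustration; they are not any vendor’s actual provisioning API.

```python
# Conceptual zero-touch provisioning flow: a branch appliance identifies
# itself by serial number and fetches its site configuration over the
# first available uplink. CONFIG_STORE stands in for the orchestrator;
# all names here are hypothetical.

CONFIG_STORE = {
    "SN-0001": {"site": "branch-london", "tunnels": ["pop-london"]},
    "SN-0002": {"site": "branch-leeds", "tunnels": ["pop-manchester"]},
}

def zero_touch_provision(serial, uplinks):
    """Return the device's config, retrieved via the first live uplink."""
    uplink = next((u for u in uplinks if u["up"]), None)
    if uplink is None:
        raise RuntimeError("no uplink available")
    config = CONFIG_STORE[serial]  # in reality, fetched from the orchestrator
    return {"via": uplink["name"], **config}

result = zero_touch_provision(
    "SN-0001",
    [{"name": "broadband", "up": False}, {"name": "lte", "up": True}],
)
print(result["via"], result["site"])  # lte branch-london
```

The point of the sketch is the dependency the paragraph describes: without a live transport (often LTE on day one), zero-touch deployment has nothing to bootstrap over.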
What are the top 10 SD-WAN options?
Note: I’ve taken some examples from our curated list of vendors; this is not an endorsement of capability.
1. Aryaka

Aryaka operates an MPLS network with local access to each PoP via the internet. The proposition aims to deliver the best of both worlds, enabling customers to replace their traditional WAN while maintaining core network performance. The Aryaka network is located within 30 ms of 90% of the world’s enterprise users.
The overall proposition incorporates the aforementioned global network together with SmartConnect, Aryaka’s flagship product, which offers WAN optimisation for cloud and SaaS applications, SD-WAN functionality, and real-time stats and reporting with 24/7 support. The Aryaka SmartCDN capability combines web and HTTP/HTTPS acceleration with caching.
2. Versa

A leading Visionary and disruptor within the Gartner WAN Edge Infrastructure Magic Quadrant, Versa has sold 150,000+ software licences globally.
Security is an inherently important feature when comparing SD-WAN options. Versa’s Security service is NSS Labs recommended for enterprise-class software-defined features, such as NGFW and UTM. Versa’s Secure Cloud IP architecture is a multi-tenant cloud software platform integrating routing, SD-WAN, security and automation. Its Titan product is a cloud-managed service that delivers these enterprise-grade capabilities.
3. CloudGenix

SD-WAN is often associated with a pure internet deployment model vs. the use of private WAN services, such as MPLS and VPLS. The CloudGenix SD-WAN product represents a good fit for companies looking to deploy hybrid connectivity combining internet, MPLS and wireless services.
The proposition also highlights CloudGenix’s drive to replace expensive routers with simpler x86-based systems.
4. NetWolves

NetWolves promotes its proprietary technology for secure management and network monitoring, which enables it to better support managed services for diagnostics, repair and reporting.
Together with SD-WAN, NetWolves offers consolidated billing across multiple aspects of telecoms, including cloud and security. NetWolves is an interesting proposition, as it is positioned to resell and provision just about any type of connectivity (wireline and wireless) in order to create a custom system. The key is to remember that its managed services portfolio is perhaps its most prevalent value-add.
5. Masergy

Masergy was (and still is) recognised for a well-scaled global network with support for Layer 2 and Layer 3 WAN services to meet the demands of mission-critical applications. In fact, its network is the largest independent SD-WAN platform in the world. As of this writing, Masergy has evolved into a leading provider of SD-WAN, security and cloud services, further enhancing its private network backbone.
Masergy’s marketing suggests the proposition and services are based on network functions virtualization, advanced machine learning and big data analytics to drive the flexibility, visibility and control that enterprise IT teams require.
6. Cato

Cato promotes its capability to offer global reach and self-service with cloud agility in order to lower total cost of ownership vs. traditional MPLS networks.
The Cato service is based on a global SLA-backed network of PoPs, interconnected by multiple Tier 1 carriers, with support for every connectivity product type, including 4G, 5G, broadband and resilient Ethernet. Enterprises connect to Cato over optimised and secure tunnels, using any last-mile transport (MPLS, cable, xDSL, 4G/LTE), all backed by policy-based routing per application and connectivity type to maximise SaaS and delay-sensitive application performance.
7. Talari

Talari (acquired by Oracle) consolidates all aspects of networking into a single device that supports internet, MPLS and VPLS services.
Talari has been around in the SD-WAN technology space for many years and is considered a leader. Talari delivers advanced MPLS-class reliability and application QoE (Quality of Experience), which is highly trusted — as demonstrated with its EMS-911 and public safety unified contact centre support across the U.S.
The Talari product offers sub-second response times, ensuring any issues across the network are dealt with fast. With intelligent link aggregation and packet replication, Talari’s overall proposition reflects a true SD-WAN capability.
8. Expereo

With eight global offices and live customer connections in 200+ countries, Expereo is a network aggregator that supports Cisco, Citrix, Silver Peak, VeloCloud and Viptela. In order to complement its network and relationships, Expereo offers professional services that include local smart hands, rack and stack, cabling and global site surveys.
9. Ignyte

Ignyte promotes the expertise of its engineering staff, who have a deep understanding of Cisco-based technologies. The Ignyte approach is vendor-agnostic, enabling its technical team to align your specific business requirements to the best possible product and service.
Ignyte has multiple examples of where it has integrated circuits that are still in contract (i.e., MPLS) into a hybrid WAN approach, adding SD-WAN where possible. Note that Ignyte’s NOC is U.S.-based, but it does offer 24/7/365 Tier 1 and 2 support.
10. Open Systems
Open Systems represents a mature offering, with more than 20 years’ experience delivering technology network services.
Open Systems’ unified SD-WAN Platform has already integrated dozens of security, routing and performance features into a simple-to-deploy and administer service, backed by 24/7 monitoring and support.
The security aspect of Open Systems is included at all layers of the network, both at the WAN edge and within the cloud.
What are the comparison points to use when evaluating SD-WAN vendors?
In order to compare and evaluate each provider and their capabilities, I’ve listed 18 comparison points to consider across HQ, branch office and data center locations.
1. What is the focus technology?
The SD-WAN service is either vendor-based (think Cisco Meraki) or offered by network providers across various products and services. If your business engages with a telco, the options will not normally be limited to a single capability — Expereo is a good example of a network provider that offers multiple SD-WAN vendor services.
2. What other technologies are supported?
With point 1 in mind, many providers will focus on a single SD-WAN product. It is, therefore, a prerequisite to understand which services are supported across each of your potential engagements.
Additionally, you will need to know overall capability including data center, WAN optimization, UCaaS (Unified Communications), cloud and security.
3. The SD-WAN elevator pitch
Understanding the offering in a few sentences will provide your IT team with some initial thoughts on whether the SD-WAN service is a good fit. As an example, does the vendor offer next-generation security or in-depth application reporting? How is single-pane-of-glass management achieved? How does the SD-WAN service work against packet loss or data security breaches? In respect of SD-WAN orchestration, how does the service deal with provisioning and centralized management? And does the SD-WAN offer consolidated WAN optimization, security and other value-added services?
4. Is the service sold stand-alone?
The majority of SD-WAN products are sold as a capability with or without network services. Select providers lead with the network and layer SD-WAN on top, meaning both aspects are tied to one contract. In other cases, certain providers offer a core network that is intrinsically linked to the SD-WAN capability.
5. Does the capability support MPLS, VPLS and Layer 2 point-to-point and multipoint services?
While SD-WAN products are typically internet-based, the original concept behind software-defined networking presented the ability to terminate any circuit type. We recognise certain vendors align more to the internet — Meraki is a good example where the proposition actually defines true SD-WAN as an internet-based platform.
6. Does the SD-WAN service offer global coverage?
Support for global services requires a focus not only on fix times but also on network connectivity. Where possible, I recommend single-backbone ISP services to maintain the best possible latency and jitter performance for application traffic.
7. What is the difference between SD-WAN service providers and vendors?
Creating a matrix of differences allows IT teams to consider each feature. An example is whether the SD-WAN product supports cloud applications and services, such as Amazon AWS, or perhaps the service offers a certain type of redundancy.
Feature comparisons should also cover how application performance is enhanced with technologies — such as dynamic path selection — especially if a preferred route exists (or perhaps packet loss is occurring) across Ethernet, LTE and broadband, caching, WAN optimization and Quality of Service. Cost savings is also one of the main drivers behind SD-WAN.
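The dynamic path selection behaviour described above is easy to sketch: score each available link on measured loss and latency and steer the application to the best compliant one. The link names, thresholds and scoring rule below are hypothetical, not any vendor’s actual algorithm.

```python
# Illustrative sketch of dynamic path selection: choose the best link
# for an application class based on measured loss and latency.
# Link names and SLA thresholds are invented for illustration.

def select_path(links, max_loss_pct, max_latency_ms):
    """Return the link that meets the app's SLA, else the least-bad link."""
    eligible = [l for l in links
                if l["loss_pct"] <= max_loss_pct
                and l["latency_ms"] <= max_latency_ms]
    candidates = eligible or links  # fall back if nothing meets the SLA
    # Prefer low loss first, then low latency.
    return min(candidates, key=lambda l: (l["loss_pct"], l["latency_ms"]))

links = [
    {"name": "ethernet", "loss_pct": 0.1, "latency_ms": 12},
    {"name": "broadband", "loss_pct": 2.5, "latency_ms": 25},
    {"name": "lte", "loss_pct": 0.8, "latency_ms": 45},
]

# A delay-sensitive app with tight loss and latency bounds.
best = select_path(links, max_loss_pct=1.0, max_latency_ms=30)
print(best["name"])  # ethernet
```

If the Ethernet circuit starts dropping packets, the same function steers traffic to the next-best transport, which is the behaviour the feature comparison should probe.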
8. What is the sweet spot of each in terms of market — e.g., SME or large global enterprise?
In certain cases, the capability will be aligned to deployments of between five and 500 sites; others scale up to 1,000 sites. Understanding a product’s sweet spot quickly allows you to remove certain options from your list of top providers and vendors.
9. Are certain providers a bad fit?
There are some cases where capability does not fit specific requirements. As an example, some products may not support five sites or fewer — or perhaps international reach is not a possibility.
10. What SD-WAN architecture is supported?
SD-WAN services are available as hardware-based, virtualised or edge-based network gateways. The core architecture of SD-WAN products and services might be based on private MPLS, which essentially provides the best of both worlds.
11. How is the SD-WAN provider’s core network structured?
Whether or not the provider offers its own core network backbone is perhaps most applicable to international deployments, where latency and jitter SLAs are important for delay-sensitive and mission-critical IP traffic.
If network providers operate core MPLS nodes, local ISP connectivity is purchased with connection via secure IPsec VPN to the closest WAN edge network node.
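To make the "closest WAN edge node" idea concrete, the toy sketch below picks the lowest-latency PoP from a set of round-trip-time measurements; that node is where the IPsec tunnel would terminate. The PoP names and latency figures are invented, and real services measure these values continuously rather than once.

```python
# Toy sketch: choose the nearest provider PoP by measured round-trip
# time; the branch's IPsec tunnel would be built to that node.
# PoP names and RTT figures are invented for illustration.

def closest_pop(rtt_ms_by_pop):
    """Return (pop, rtt_ms) for the lowest-latency point of presence."""
    return min(rtt_ms_by_pop.items(), key=lambda kv: kv[1])

measurements = {"london": 4.2, "frankfurt": 11.8, "new-york": 72.5}
pop, rtt = closest_pop(measurements)
print(f"Tunnel to {pop} ({rtt} ms)")  # Tunnel to london (4.2 ms)
```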
12. What portfolio of circuit types is offered?
Aside from selling circuits on the provider’s own network, does the product include 4G, 5G and broadband support? What other carriers can be integrated into the capability — e.g., private circuits or satellite?
13. How are ISP circuits managed across internet-based SD-WAN?
Management of SD-WAN circuits is critical to the ongoing success of your service. While the cost savings from using the internet are clearly appealing, thought must be given to how monitoring and ticket resolution are performed.
14. How is management performed for the overall SD-WAN service?
In most cases, SD-WAN is viewed as a self-managed DIY wide area network. Where managed services are included, most providers and vendors will offer read access to devices and the opportunity to make config changes across certain elements.
15. Can you access an SD-WAN proof of concept?
Demo and proof-of-concept offerings vary quite significantly. While Meraki will offer a trial period, others will offer not only trials and demos but also the ability to simulate bandwidth to really understand performance.
16. When did the SD-WAN product launch?
The year of product launch provides an indication of overall experience.
17. How many customers use the SD-WAN product?
Some providers and vendors will state that their number of connected customers is confidential, while others will supply approximate figures. Connected customers is a valuable stat when looking at niche offerings or startups.
18. What cost savings can be achieved?
Sometimes, software-defined WAN isn’t all about saving money. What we do know is that consolidating and simplifying your network often results in a lower total cost of ownership (TCO) for a multitude of reasons.
The majority of SD-WAN marketing surrounds the total cost of ownership reduction when compared to MPLS. The reality depends on the features you are adding, together with recognising that some aspects of cost are not as tangible — i.e., an SD-WAN offers better network insights, support and so on. It is important to recognise that today, enterprise businesses should be able to deploy hybrid networking. Where required, MPLS or VPLS may make more sense than deploying internet SD-WAN. Using MPLS or VPLS does not preclude your organisation from leveraging SD-WAN benefits since select providers are suited to support any connectivity type.
Editor’s note: In this opinion piece, industry analyst Zeus Kerravala shares his thoughts on open data centers and Extreme Networks’ approach. Extreme Networks is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.
Open is an overused word in the networking industry. Every vendor claims to be open. The degree of openness can vary greatly from vendor to vendor, however. In theory, having one open API is technically open. An engineer or developer can’t do much with a single API, but it is open.
Consider mobile phones, for example. Apple’s interfaces are open, but they only let developers and partners access the features Apple enables — and the rest of the phone is locked down. This is also true in networking. As a result, network professionals need to do their homework and make sure they ask the right questions about open data centers.
The terms “open” and “standards-based” are often used interchangeably, but they are two different things. A vendor can be open and standards-based or closed and standards-based. Similarly, the vendor can be proprietary and open or proprietary and closed. For example, Apple’s open interfaces are proprietary, as is Windows. Linux is open and standards-based, so anyone can work on it.
Network professionals considering a network refresh in the data center are likely to hear the word open tossed around a lot. But networking pros need to understand how open data centers and vendors actually are. I would start by asking the following questions:
- Is your service multi-vendor?
- Which vendors do you support?
- Do you support just network vendors or can you support other technologies, such as application delivery controllers and security appliances?
- How big is the community of developers that uses your software products?
- How easily can I interface with other users of the product?
Open source tool enables automation
One networking vendor, Extreme Networks, unveiled this week a data center product with an emphasis on open. Extreme’s new Agile Data Center comprises several products, including Extreme Workflow Composer, Embedded Fabric Automation, Extreme Management Center (XMC), ExtremeAnalytics and two new hardware platforms.
This is the first major data center announcement for Extreme since it acquired Brocade’s data center assets in 2017. Agile Data Center combines Brocade and Extreme technologies.
Rather than relying heavily on APIs for network programmability and automation, Extreme built Workflow Composer on StackStorm, an open source platform designed for runbook automation. It takes a DevOps approach to automation and focuses primarily on running workflows.
Extreme could have built Workflow Composer as a proprietary tool. Instead, it opted for an open source foundation that is familiar to engineers and comes with an ecosystem creating integration packs.
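The runbook-automation pattern itself is simple to sketch: a workflow is an ordered list of named steps, each reading and updating a shared context. The fragment below is not StackStorm’s actual API; the step names and context-passing scheme are invented purely to illustrate the pattern tools like Workflow Composer formalise.

```python
# Minimal runbook-style workflow runner, illustrating the pattern that
# platforms such as StackStorm formalise. This is NOT the StackStorm
# API; step names and the context scheme are invented.

def run_workflow(steps, context):
    """Execute steps in order; each step may read and update the context."""
    for name, action in steps:
        context = action(context)
        context.setdefault("log", []).append(name)  # record execution order
    return context

steps = [
    ("gather_facts", lambda ctx: {**ctx, "device": "switch-01"}),
    ("apply_config", lambda ctx: {**ctx, "configured": True}),
    ("verify", lambda ctx: {**ctx, "ok": ctx.get("configured", False)}),
]

result = run_workflow(steps, {})
print(result["ok"], result["log"])  # True ['gather_facts', 'apply_config', 'verify']
```

In a real integration pack, each step would call a vendor’s device API instead of a lambda, which is what makes the multi-vendor story below possible.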
Multi-vendor network integration
Extreme’s definition of open essentially means no vendor lock-in. Workflow Composer can automate workflows across any vendor, including Arista, Cisco and Juniper. Extreme can integrate with more than 100 vendors that have integration packs on exchange.stackstorm.org. Customers may have to tweak the code some, but they do not have to start with a blank sheet of paper.
StackStorm extends beyond networking, too. As a result, engineers who use Workflow Composer can extend the automation capabilities to things like Palo Alto and Check Point firewalls, VMware vSphere, ServiceNow’s service desk and others.
You could argue the network is the foundation of a modernized data center as it provides the connectivity fabric between everything. But open data centers incorporate more than just networking. By building Workflow Composer on StackStorm, Extreme can orchestrate and automate workflows from the network to the application — and everything in between.
Getting its foot in the door
Additionally, XMC is designed for multi-vendor management, with the goal of helping customers transition from an old data center to a modernized one without having to rip and replace. Extreme’s approach here is not magnanimity; rather, it’s a strategic bet.
Despite being close to $1 billion in revenue, Extreme is still a minority-share player in networking. Don’t expect a customer to take out its existing infrastructure and replace it with Extreme overnight.
A better approach is to help customers manage their environments and when the network devices come up for renewal, compete for the business then. This approach is similar to the one Aruba Networks took when it was a startup with its AirWave management tool. At one time, it managed Cisco access points (APs) better than Cisco’s own tools. This strategy helped Aruba get its foot in the door of many companies, which it eventually parlayed into the AP business.
Editor’s note: In this opinion piece, industry analyst Zeus Kerravala shares his thoughts on 400 Gigabit Ethernet adoption and Arista Networks’ 400 GbE approach. Arista is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.
Speeds of 400 Gigabit Ethernet might seem futuristic. Currently, however, some use cases make sense. For example, advanced applications — such as artificial intelligence, virtual reality and serverless computing — require three components: fast networks, storage and compute. If any of these three components falters, the application won’t work optimally.
The rise of GPUs as a data center resource has seen compute speeds grow exponentially. Flash storage and NVMe have given storage performance an exponential jump. And 400 Gigabit Ethernet will enable the network to keep up.
As an example, I recently chatted with a data scientist at a healthcare group in the Boston area. He told me his organization’s biggest AI challenge is ensuring GPUs are fed enough data to keep them busy.
In this case, a next-generation server helps. But the organization also needs a network that can send the volume of data to increase the use of the GPUs to near peak. Over-investing in any one area wastes money. So, the three legs of the application stool must be kept in lockstep.
400 Gigabit Ethernet needs solid support system
The technology is not the problem. Early adopters, such as web-scale companies, will likely adopt 400 GbE as soon as it’s available because they deploy all technology this way.
For the rest of the world, though, adoption of 400 GbE networking requires greater ecosystem support. For example, the technology needs better availability and lower prices for optics, cabling and server connectors.
Next year will bring 400 GbE products to market, but it won’t deliver the large ecosystem required for mass adoption. Also, over the next 12 months, the price of 400 GbE, particularly the optics, will fall — making it more affordable for everyone.
A network that operates at 400 Gigabit Ethernet may seem like overkill. But, if I’ve learned one thing in nearly 40 years in this industry, no matter how much bandwidth is available, we find a way to consume it. Even if 400 GbE is not right for your business today — because of price or other factors — you should still educate yourself on the different options so you can make the right decision when the time comes.
Network needs to keep up with other trends
One networking vendor, Arista Networks in Santa Clara, Calif., provided some details this week on its 400 Gigabit Ethernet roadmap. The vendor’s new 7060X4 Series offers 32 ports of 400 GbE in a 1 rack unit chassis. The products are based on Broadcom Tomahawk 3 silicon that offers 12.8 Tbps of switching capacity.
Customers that deploy the switch have the option of splitting each port into four 100 GbE ports for a total of 128 100 GbE ports. Network managers can deploy them as 100 GbE today and migrate to 400 GbE, if required.
In 2015, Arista introduced its 7060X with 32 ports of 100 GbE enabled by 128 lanes of 25 Gig serializer/deserializer (SerDes). In 2017, the 7260X3 Series brought 64 ports of 100 GbE using 256 lanes of 25 Gig SerDes.
Now, in 2018, the 7060X4 has 32 ports of 400 GbE on a switch with 256 lanes of 50 Gig SerDes. This evolution represents a fourfold increase in capacity in about four years. Additionally, the 7060X4 Series features new traffic management and load balancing capabilities.
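As a sanity check on that evolution, the arithmetic can be multiplied out directly. The lane counts and speeds below are the ones quoted above; the only added assumption is the standard conversion of lanes × lane speed into aggregate capacity.

```python
# Checking the SerDes arithmetic from the article: lanes x lane speed
# gives total switching capacity, and port speed x port count must
# match that capacity (including the 4:1 breakout case).

def capacity_tbps(lanes, gbps_per_lane):
    return lanes * gbps_per_lane / 1000

assert capacity_tbps(128, 25) == 3.2    # 2015: 7060X, 32 x 100 GbE
assert capacity_tbps(256, 25) == 6.4    # 2017: 7260X3, 64 x 100 GbE
assert capacity_tbps(256, 50) == 12.8   # 2018: 7060X4, 32 x 400 GbE

# 32 ports of 400 GbE broken out 4:1 yields 128 ports of 100 GbE.
assert 32 * 400 == 128 * 100 == 12800
```

The jump from 3.2 Tbps to 12.8 Tbps is the fourfold capacity increase in roughly four years noted above.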
Given the growth in data and bandwidth use, this kind of Moore’s Law performance is crucial for the network to keep up with other tech trends, such as GPUs and flash storage.
A debate over optics
Arista’s 400 GbE series is available in two configurations that are virtually identical except for the optical connectors they support. The 7060PX4-32 model uses OSFP optics, and the 7060DX4-32 supports QSFP-DD optics.
Currently, in the networking industry, there’s a debate as to which optic is “better” — and the answer depends on the customer. QSFP-DD is backward-compatible with QSFP-100 optics, making it ideal for customers who want to migrate slowly from 100 GbE to 400 GbE.
From a technology perspective, OSFP optics are better because they are easier to cool, consume less power and have more options. OSFP connectors support 100 km single-mode fiber, making them well-suited for data center interconnect, while QSFP-DD does not provide this support.
Arista does offer a passive OSFP-to-QSFP adapter, enabling customers to deploy a 400 GbE switch today and run it at 100 GbE. Ultimately, though, customers must choose between better rear-facing features for ease of migration and better forward-looking ones.
Personally, I favor the OSFP connector because the type of organization that would deploy 400 GbE today would benefit from its additional capabilities. I understand the appeal of the QSFP-DD connectors. But, migrating slowly to 400 GbE seems to be an oxymoron.
Editor’s note: Industry analyst Zeus Kerravala provides his thoughts on Arista Networks’ effort to unify wired and wireless networks. Arista is a client of Kerravala’s ZK Research, a consulting firm based in Westminster, Mass.
Anyone who has used a wireless device has likely experienced a scenario where the device was connected to the access point but no network services worked. Or perhaps the device was connected, got booted off, and the user couldn’t re-establish connectivity. These problems have been around as long as Wi-Fi and can affect worker productivity and company revenue.
In the past, Wi-Fi flakiness was annoying, but it wasn’t business-critical because wireless was considered a network of convenience. Today, however, that has changed. Many workers need Wi-Fi to do their jobs because roaming around a campus has become the norm.
Also, Wi-Fi-connected IoT devices have proliferated. Consequently, wireless network outages or performance problems will result in key business processes not functioning properly.
Network administrators have a hard time troubleshooting Wi-Fi problems. A recent ZK Research survey found many network engineers spend about 20% of their time troubleshooting Wi-Fi issues. Often the problem disappears before it’s fixed. But the root cause is still there, and the issue will likely re-emerge.
The Wi-Fi network is now mission-critical and arguably as important as the data center network.
Data center and campus edge come together
Networking vendor Arista Networks, based in Santa Clara, Calif., is looking to address Wi-Fi issues. The company announced this week its Cognitive Campus architecture — a suite of tools that unifies wired and wireless networks by applying a software-driven approach to the campus. To date, Arista has found most of its success by selling its products into data centers.
Cognitive Campus sheds some light on Arista’s planned acquisition of Mojo Networks. Earlier this year, Arista said it would acquire Mojo, a company that sells its products at the campus edge, signaling it wants to be a bigger player in the enterprise networking market.
Arista has other campus products, but they’re targeted at the campus core where the requirements are similar to the data center. As a result, Mojo is Arista’s first true campus edge offering.
Specifically, Arista is looking to use Mojo’s Cognitive WiFi to remove traditional bottlenecks created by Wi-Fi controllers. Traditional Wi-Fi products have focused on ensuring connectivity rather than understanding application performance or client health.
Cognitive WiFi — combined with Arista’s CloudVision management suite — looks to provide better visibility into network performance so network engineers can identify the source of a Wi-Fi problem before it affects business. Arista has integrated the wireless edge information into CloudVision.
Mojo’s management model disaggregated the control and data planes so its cloud controller only handles management and configuration updates. If the access points (APs) lost the connection to the controller, the network would continue to operate. Most other APs would stop working if controller connectivity was lost.
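One simplified way to picture that disaggregated model: the AP forwards traffic from its last-known-good configuration, and controller reachability only gates management operations. This is a hypothetical sketch of the concept, not Mojo’s or Arista’s implementation.

```python
# Hypothetical sketch of a disaggregated control/data plane: losing
# the cloud controller stops configuration updates but not packet
# forwarding. Not any vendor's actual implementation.

class AccessPoint:
    def __init__(self, config):
        self.config = config        # last-known-good configuration
        self.controller_up = True

    def lose_controller(self):
        self.controller_up = False  # affects the management plane only

    def can_forward(self):
        # The data plane depends on having a config, not on the controller.
        return self.config is not None

    def can_update_config(self):
        return self.controller_up

ap = AccessPoint(config={"ssid": "corp"})
ap.lose_controller()
print(ap.can_forward(), ap.can_update_config())  # True False
```

In a controller-coupled design, `can_forward` would also test `controller_up`, which is exactly the failure mode the disaggregated model avoids.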
As part of Cognitive Campus, Arista can aggregate data from the wired network and combine it with wireless data to perform cognitive analytics across the network.
The importance of analytics
Arista’s planned acquisition of Mojo left some industry observers puzzled. On the surface, a data center and the wireless edge don’t have much in common.
However, the intersection of the two spawns a treasure trove of data. As a result, analytics of the information can be used to transform the network. Arista’s Cognitive software brings some visibility and intelligence to the campus network.
Network professionals should rethink network operations and embrace the analytics and automation entering the campus network.
For the past five years, my advice to engineers has been: If you’re doing something today that’s not strategic to your company or resume, don’t do it, and find a way to automate it. Wireless connectivity and performance issues are excellent examples of this advice.
I’ve never heard of engineers getting hired because they were really good at solving problems that shouldn’t happen in the first place. Focus on software skills, data analytics and architecture, and understanding the user experience. Those skills are required in the digital era.
It’s fair to say the cloud has become a core component of most organizations’ IT strategies. The growth of public cloud services — such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) — has been remarkable as application developers and IT professionals look to simplify the way they work and speed up deployment times.
As is the case with most things, however, for every yin there’s a yang. The cloud does have a dark side. In this case, it’s security. When I think of the effect the cloud has had on network and security teams, I’m reminded of the scene in Shakespeare’s Julius Caesar when Mark Antony shouts, “Cry ‘havoc!’, and let slip the dogs of war.”
Cry havoc indeed. The tight control IT used to have on the environment is now gone as public cloud services are largely dependent on the internet for transport, making workloads easier to breach than if they were in a tightly controlled data center. Some of the more common challenges when using public clouds include not having visibility across multiple clouds, lack of centralization, keeping up with compliance mandates, detecting threats and responding fast enough.
These problems have given rise to many security startups aimed at solving a piece of the cloud security puzzle. This, of course, introduces new challenges as securing the cloud requires some manual correlation of data, which can be slow, time-consuming and inaccurate and leads to gaps in coverage. Cloud security is much like a puzzle that hasn’t been put together. All the pieces are there, but it takes a lot of effort to get the full picture.
Palo Alto RedLock acquisition automates cloud security
Palo Alto Networks is trying to simplify the process of securing the cloud. Coming into 2018, the company had solid network and endpoint security products. In March, it added cloud security vendor Evident in a $300 million acquisition. Evident brought a rich set of cloud compliance capabilities to the Palo Alto platform.
This week, Palo Alto announced its intention to buy cloud threat defense vendor RedLock for $173 million. RedLock’s strength is its analytic and automation capabilities that help network and security teams replace manual inspection of network traffic with automated, real-time remediation.
RedLock’s products capture detailed events across all major public cloud platforms to quickly see and fix threats. The vendor correlates resource configurations with network traffic and third-party feeds to expose vulnerabilities, identify compromised accounts and find insider threats via analysis of user behavior. The product then automates remediation by integrating into existing incident response workflows.
For example, if a developer accidentally leaked cloud access keys on a site such as GitHub, a hacker could steal them and break into the cloud environment using those keys. RedLock’s analytic engine would recognize the key was being used in a strange location to do unusual things and immediately alert the security team with a full history of activities associated with that key.
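A toy version of that behavioral check might flag any API call whose source region has never been seen for the key before. The event fields, key ID and region names below are invented; real cloud threat detection correlates many more signals (traffic, configuration, user behavior) than this single dimension.

```python
# Toy anomaly check in the spirit of the leaked-key example: flag API
# calls from regions never before observed for a given access key.
# Key IDs and regions are fabricated; real products correlate far
# more signals than source region alone.

from collections import defaultdict

class KeyUsageMonitor:
    def __init__(self):
        self.seen_regions = defaultdict(set)

    def observe(self, key_id, region):
        """Record a call and return True if it looks anomalous for the key."""
        anomalous = (key_id in self.seen_regions
                     and region not in self.seen_regions[key_id])
        self.seen_regions[key_id].add(region)
        return anomalous

mon = KeyUsageMonitor()
mon.observe("AKIA-EXAMPLE", "us-east-1")               # baseline usage
mon.observe("AKIA-EXAMPLE", "us-east-1")               # known region
alert = mon.observe("AKIA-EXAMPLE", "region-unknown")  # never seen before
print(alert)  # True
```

The value of the real product is what happens after the `True`: attaching the full history of that key’s activity to the alert and feeding it into a remediation workflow.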
Analytics and security go hand in hand
I can’t overstate the importance of analytics in an organization’s security strategy. Simply too much data comes in from too many sources to be analyzed manually. Highly skilled network and security professionals might have been able to do things manually in the past, but today it’s just not possible. Analytics and automation should be viewed as an engineer’s best friend, as they can greatly augment skill sets.
The addition of RedLock brings Palo Alto customers some benefit today. But, over time, the company plans to integrate the two platforms, creating a “1+1 = 3” scenario. The combination of Evident and RedLock brings the following capabilities to Palo Alto customers in a single platform:
- Continuous discovery and inventory of public clouds via a centralized dashboard that shows assets across AWS, GCP and Azure across multiple accounts and regions;
- Real-time compliance reporting for industry standards such as NIST, PCI, HIPAA, GDPR and CIS. Customers can access customized reports with a single click;
- Ability to prioritize vulnerabilities, detect cloud threats and investigate incidents in minutes, as well as provide automated remediation of security risks and policy violations across all major clouds.
Additionally, Palo Alto’s other products can be used to protect other parts of the cloud ecosystem. For example, its VM-Series products protect and segment cloud workloads, and Traps secures operating systems and applications within workloads.
Cloud security doesn’t have an “easy button” because it requires multiple products to protect the different areas of the cloud. The addition of RedLock brings rich analytic capabilities, enhancing Palo Alto’s already-robust cloud security portfolio that now offers protection and compliance across the entire public cloud journey.
Cisco said it would acquire SD-WAN vendor Viptela Inc. in a deal valued at $610 million. The transaction, expected to close later this year, will fortify Cisco’s existing SD-WAN portfolio with additional cloud-based services, the company said in a blog post.
“Viptela’s technology is cloud-first, with a focus on simplicity and ease of deployment while simultaneously providing a rich set of capabilities and scale,” said Scott Harrell, Cisco’s senior vice president of product management, Enterprise Networking Group, in a statement. “With Viptela and Cisco, we will be able to deliver a comprehensive portfolio of on-premises, hybrid and cloud-based SD-WAN solutions.”
Viptela, based in San Jose, Calif., emerged from stealth three years ago with an SD-WAN framework it called the Secure Extensible Network (SEN). The platform includes physical vEdge routers that form a secure data plane; a central controller runs on an x86 server, either on-site or in the cloud, and orchestrates connectivity among the routers. Viptela sells its framework to enterprises, and its technology is also used to underpin managed SD-WAN services offered by a number of providers, including Verizon and Singtel.
SD-WAN continues to be a hot market, with revenues expected to eclipse $6 billion by 2020, according to IDC.
Cisco said the acquisition will dovetail with its Digital Network Architecture strategy to support software-driven networks that are more programmable, responsive and dynamic. Viptela will join Cisco’s enterprise routing unit within the vendor’s Networking and Security Group, led by Senior Vice President and General Manager David Goeckeler.
A fair amount of credence has been given to media statements suggesting that MPLS network services are decreasing in popularity.
I believe WAN technology is becoming less about products and more about capability, due to the rise of cloud services and the use of the Internet.
A few years ago, the default IT management decision was split between companies with a direct interest in private services (MPLS) and companies with an interest in public services (the IPsec VPN). I appreciate this kind of view oversimplifies things, but you get the idea.
Today, IT capability is becoming complicated. The hard part is figuring out how to service the unbelievable technology we hold in our hands together with the resources on offer from cloud vendors.
When I say unbelievable, I really do mean it in the true sense of the word.
The latest phones have processing power comparable to desktop PCs of only a few years ago. When these devices are coupled with access to applications which reside on both the corporate infrastructure and the Internet, they become one of the most valuable devices within the Enterprise.
If you read the various telecoms publications, experts believe the future is mobile, and I think they might be right. (Clearly, laptops remain very relevant in combination with tablets and phones.)
The way we work is directly taking us down a path of network design and architecture which is more flexible.
MPLS vs SD-WAN
Perhaps the biggest challenger for proponents of MPLS networks is SD-WAN. SDN promises to free the Enterprise from restrictive, private MPLS by offering granular traffic prioritisation, security and privacy within a single box or application.
The fact remains, careful consideration must be given to the underlying connectivity used with SD-WAN. There is no doubt the Internet is a scaled platform vs even a few years ago but IT teams must still consider the laws of physics. In other words, the distance between locations on a global network coupled with the use of multiple ISP backbones could degrade performance.
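The laws-of-physics point can be made concrete with rough arithmetic: light in optical fibre propagates at about two-thirds of its speed in a vacuum, roughly 5 microseconds per kilometre, so route distance alone sets a floor on latency before any ISP backbone, router hop or queue is involved. A quick illustrative calculation (the route length below is an assumed figure, and real fibre paths are longer than great-circle distance):

```python
# Back-of-the-envelope propagation delay in optical fibre.
# ~5 us per km comes from light travelling at roughly 2/3 of c in glass.
# Illustrative only: excludes serialisation, queuing and routing overhead.

FIBRE_DELAY_US_PER_KM = 5.0

def min_rtt_ms(route_km):
    """Best-case round-trip time in milliseconds for a given fibre path."""
    return 2 * route_km * FIBRE_DELAY_US_PER_KM / 1000

# London to New York, assuming ~5,600 km of fibre path
print(min_rtt_ms(5600))  # 56.0 ms before a single router is involved
```

Multiply that floor across a global topology stitched over several ISP backbones and it becomes clear why application performance needs checking before committing to Internet-only underlay.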
Fig 1 is an example of an SD-WAN network deployed over multiple ISP backbones.
With this said, regardless of the actual product, analysis of your specific requirements vs. the provider’s capability remains as important as ever. If your organisation begins by asking the right questions, the answers will determine whether any WAN product is fit for purpose; this applies to MPLS, VPLS and SDN alike.
First, the top 3 reasons why MPLS remains relevant.
1. MPLS VPN services are delivered across private infrastructure, so IPsec and other encryption services are not required.
2. MPLS QoS (Quality of Service) provides the Enterprise with the ability to prioritise applications, including real-time traffic (voice, video) and other mission-critical apps such as Citrix.
3. Service level agreements covering latency, jitter, uptime and other performance factors are generally stronger across private infrastructures.
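To illustrate the QoS point from the application side, here is a minimal sketch of DSCP marking, the packet-level signal an MPLS provider’s QoS policy typically classifies on. Whether the marking is honoured depends entirely on the provider’s configured classes; this is an illustration of the mechanism, not a provider configuration.

```python
# Mark a UDP socket's traffic with DSCP EF (Expedited Forwarding, value
# 46), conventionally used for real-time voice. The IP TOS byte carries
# the DSCP in its top six bits, hence the left shift by two.
import socket

DSCP_EF = 46  # Expedited Forwarding, per RFC 4594

def mark_realtime(sock, dscp=DSCP_EF):
    """Set the DSCP marking on a socket and return the resulting TOS byte."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

voice_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(mark_realtime(voice_sock))  # 184, i.e. 46 << 2
voice_sock.close()
```

Across an MPLS core the provider maps these markings into its QoS classes end to end; across the raw Internet the same markings are frequently re-marked or ignored, which is part of why SD-WAN products implement their own path selection.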
And now the top 3 reasons why SD-WAN services are of interest.
- SD-WAN typically leverages the Internet to deliver secure, highly flexible, encrypted services to any form of device and connectivity.
- The promise of SD-WAN services offers complete control and flexibility via easy to use software driven portals.
- The innovation of SDN means the Enterprise should see new features and enhancements released on a much more regular time frame vs MPLS network products.
Is a hybrid WAN the future?
The typical network today is not generally based on a single product or service. In fact, the majority of deployments consist of core MPLS network connectivity between key offices with SD-WAN (or DMVPN) connectivity over the Internet for smaller offices and remote users. Data Centre and hosting facilities are connected via layer 2 VPLS or point to point / multipoint connectivity.
(Architecture and WAN design is out of scope here).
As I have previously mentioned, the requirements of Enterprise business are not simply based on one platform. The result is generally a hybrid. There are exceptions here: a national or well-scaled global architecture could be delivered over a single SD-WAN deployment but, as mentioned earlier, careful thought must be given to application performance in terms of packet latency.
I have personally worked with one organisation in the US whose platform is based on hybrid connectivity. The circuit is delivered as point-to-point Ethernet into the provider’s network, but an intelligent device allows the business to decide what the circuit should become, i.e. layer 2, layer 3 or even Internet. (If you would benefit from knowing the provider name, drop me a message.)
Everybody has an opinion on whether MPLS is in decline. I tend to take the view that actually ‘private’ based connectivity will always be a requirement for Enterprise business, even just from the perspective of privacy. So, no, MPLS is not doomed and should remain an essential part of the tool-kit.
I do believe that SD-WAN will seriously erode the popularity of MPLS VPN as the default WAN type, especially for organisations that are able to predict the performance of their Internet connectivity.
If you are deploying SD-WAN over either a single IP backbone or multiple ‘known’ backbones, there is every possibility SD-WAN could be the only technology required, depending on your viewpoint.
The worldwide Ethernet switch market grew 2%, racking up revenues of $6.29 billion in the third quarter, according to IDC’s Worldwide Quarterly Ethernet Switch Tracker and Worldwide Quarterly Router Tracker reports.
Router revenues, meantime, rose 2.6%, to $3.56 billion as enterprises and service providers beefed up their infrastructures.
Cisco continued to see erosion in its Ethernet switch market share, IDC said, with the vendor now capturing 57% of the market, down 6.5% from Q3 2015 totals. Hewlett Packard Enterprise’s switch sales also fell, but Juniper Networks and Arista Networks both saw increases in Ethernet switch sales, with Arista notching a 31.5% hike in revenues year over year. Huawei’s Ethernet switch sales almost doubled in the period; the Chinese vendor now has 7.2% of the switching market.
“Recent macro-economic developments and maturing IT architectures have led to a spectrum of reactions by IT decision-makers across the regions with regard to Ethernet switching investments in 3Q 16,” said Rohit Mehra, IDC’s vice president of network infrastructure, in a statement. “Strong growth in the 40 GbE and 100 GbE segments specific to data center deployments brought a degree of stabilization to a market in transition where the enterprise campus market for switching declined.”
40 GbE switch sales grow
IDC said 10 GbE switch sales dropped 1.3% year over year, to $2.22 billion, while 40 GbE switch revenue jumped 20%, to $756.4 million. The two Ethernet switch market standards are now being joined by an emerging 100 GbE switch market, which saw a tripling in revenues on an annualized basis in the third quarter of the year. One-GbE switch revenue dropped 4.3% year over year, IDC said.
The increase in router sales was sparked by an 8.2% increase in enterprise routing, IDC said, cautioning that the market bears close review as more companies evaluate the use of new SD-WAN technologies.
“Software-defined network architectures and network transformation for the digital economy are among the factors shaking up the core network infrastructure segments,” said Petr Jirovsky, IDC’s research manager, Worldwide Networking Trackers, in a statement.
Broadcom Ltd. Nov. 2 said it would acquire storage and networking supplier Brocade Communications Systems Inc. in a deal valued at $5.9 billion.
Chip-maker Broadcom said it will keep Brocade’s Fibre Channel and storage area networking line but will sell the company’s IP networking business, which includes routing, switching and the wireless technology Brocade gained through its recent acquisition of Ruckus Wireless.
“This strategic acquisition enhances Broadcom’s position as one of the leading providers of enterprise storage connectivity solutions to OEM customers,” said Hock Tan, Broadcom’s CEO, in a statement. “With deep expertise in mission-critical storage networking, Brocade increases our ability to address the evolving needs of our OEM customers. In addition, we are confident that we will find a great home for Brocade’s valuable IP networking business that will best position that business for its next phase of growth.”
Since acquiring Foundry Networks eight years ago, Brocade has struggled to carve a significant niche in the enterprise networking market. Broadcom is selling the IP business in part so that its current relationship with networking customers that buy its chips, which include Cisco and Juniper, won’t be imperiled.
Broadcom said the transaction is expected to close in mid-2017.
What does the coming decade hold in store for networking’s future?
User dm18 mentioned shifts in networking that would favor software and IoT. Proposed changes included self-organizing networks that need no manual configuration, and cloud-based systems to automate threat response, patch management and backup. The same user projected widespread interconnection, with outdoor access points powered by built-in batteries and solar panels, and links between access control, cameras, climate control, lights, firearms, facial recognition and appliances.
Device density might necessitate more organized networks and eliminate large segments of home networking as telcos and large IT companies like Google step in to provide free, city-wide wireless. Managing huge quantities of data—perhaps transmitted wirelessly—might mean a new emphasis on data compression.
Some users offered up networking humor in response to the question about the future of computer networking. “In 15 years, all network gear (switches, routers, etc.) will have built in Jet Packs so that they won’t need a rack, they will just hover on jets in the designated space,” one user commented. However, others struck a more serious note, suggesting widespread mesh networking and SDN fully fulfilling its promises by 2031.
Networks will automate
“Large-budget networks will automate; small-budget networks won’t,” said user jiannone, looking to networking’s future. “Small-budget networks [will] get by on branded whiteboxes [sic] with licensing and support fees attached to low cost, high-enough throughput boxes that are more ASIC than general CPU architectures,” the user added, suggesting that today’s entertainment infrastructure may shift to IP.
User patchate brought up PCI express switching in the discussion of the future of computer networking, “It doesn’t offer any compatibility with current Ethernet-based technologies, but the underlying technology seems sound to me, at least for short-range interconnects.” The user added, enthusiastically, “If ToR switches could be replaced by PCI express lanes with CPUs having DMA access to any device installed within a rack, that would be so very incredibly awesome.”