Posted by: Ryan Shopp
Accellent, Alcatel-Lucent, Application monitoring, Compuware, DataCenter, HP Software, InfoVista, NetQoS, NetScout, Network monitoring, Opnet, Packet Design, Performance management, SolarWinds, Xangati
Next up, I plan to dig into this sector a little deeper (as always from a purely data center centric perspective – aka no End-User Monitoring that requires a desktop agent).
The priority for these products is to provide an end-to-end service/application perspective on traffic performance and capacity. The goals: help quickly troubleshoot from an application or end-point perspective, OR better understand what traffic is flowing where across the infrastructure. All this from a network-centric control point (no loading of agents on a server or client, since the network team doesn't own the responsibility for those).
So on the surface I see two main categories (each has subcategories that I'll dig into during follow-up posts):
Flow Reporting-centric (these vendors gather Cisco NetFlow, J-flow, sFlow from infrastructure agents and report in various ways)
- NetScout, SolarWinds, CA eHealth, NetQoS, Mazu Networks, Xangati, InfoVista, Opnet, Lancope, Packet Design, Q1 Labs, Alcatel-Lucent VitalNet, HP Performance Insight – to name a few
Flow Self-Collection & Reporting (these vendors span/tap actual traffic flows and report in various ways)
- NetQoS, Mazu Networks, InfoVista (through acquisition of Accellent), Lancope, CA Wily, Q1 Labs, Compuware – to name a few
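For concreteness, here is what the "flow reporting" raw material actually looks like on the wire. NetFlow v5 is a fixed binary layout (a 24-byte header followed by 48-byte flow records), per Cisco's published format. The parser below is my own illustrative Python sketch, not any vendor's code:

```python
import struct

# NetFlow v5 export datagram: fixed 24-byte header, then up to 30
# fixed 48-byte flow records (per Cisco's documented v5 format).
V5_HEADER = struct.Struct("!HHIIIIBBH")              # version, count, uptime, secs, nsecs, seq, engine type/id, sampling
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # src/dst/nexthop IPs, ifaces, pkts, octets, times, ports, flags...

def parse_v5(datagram):
    """Parse a NetFlow v5 export datagram into a list of per-flow dicts."""
    version, count, *_ = V5_HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError("not a NetFlow v5 export")
    flows = []
    for i in range(count):
        fields = V5_RECORD.unpack_from(datagram, V5_HEADER.size + i * V5_RECORD.size)
        (src, dst, _nexthop, _in_if, _out_if, pkts, octets,
         _first, _last, sport, dport, _pad, _tcpflags, proto, *_rest) = fields
        flows.append({
            "src": ".".join(str(b) for b in src),   # 4-byte address -> dotted quad
            "dst": ".".join(str(b) for b in dst),
            "sport": sport, "dport": dport,
            "proto": proto, "packets": pkts, "bytes": octets,
        })
    return flows
```

The self-collection vendors skip this export step entirely and reconstruct equivalent (often richer) records from spanned/tapped packets themselves.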
I quickly notice that many of the vendors actually support both – which I assume is about flexibility, as some customers don't have NetFlow-type capabilities enabled or don't wish to enable them for a variety of reasons.
So my first set of questions/experiences I’m now reading/researching about are:
1) What are the key benefits of going the self-collection route over the reporting-only route? Unique metrics? Scalability? Limitations around NetFlow (e.g., performance)?
2) When it comes to reporting using only NetFlow and the like, what metrics are being used these days?
I remember first integrating and reporting on RMON2 probes and early Cisco NetFlow data back in 2001 within the Lucent VitalNet product…so where are things 6 years later, now that NetFlow is much more pervasive and, I'm sure, improved?
My assumptions on some of these are as follows (vendors & users, please leave comments to help educate me for my follow-up posts):
When it comes to reporting, there are historical/capacity-centric reports & there are real-time/troubleshooting-centric views. My assumption (again, currently just an assumption…I haven't read too much on this topic yet) is that most of the reporting-centric vendors (those that don't also offer their own passive flow monitoring capability) are focused more on the historical/capacity reports (e.g., eHealth, SolarWinds, InfoVista). These reports show how much data is going where, and what type of data it is, over a day/week/month etc. Once this data is archived, they slice & dice it in a variety of ways. But basically, it's about looking for trends over time.
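The slice & dice above boils down to simple roll-ups over archived flow records. A minimal sketch, with made-up record fields and numbers purely for illustration:

```python
from collections import defaultdict

# Toy archived flow records: (day, source_ip, octets). A real product
# would read millions of these from its NetFlow archive; these values
# are invented for illustration.
flows = [
    ("2007-10-01", "10.0.0.1", 4_000_000),
    ("2007-10-01", "10.0.0.2", 1_500_000),
    ("2007-10-02", "10.0.0.1", 6_000_000),
    ("2007-10-02", "10.0.0.3", 2_000_000),
]

# Roll up by day (capacity trending) and by source (top talkers).
per_day = defaultdict(int)
per_talker = defaultdict(int)
for day, src, octets in flows:
    per_day[day] += octets
    per_talker[src] += octets

# "Who's sending the most?" sorted descending by volume.
top_talkers = sorted(per_talker.items(), key=lambda kv: kv[1], reverse=True)
```

The same archive supports per-application, per-interface, or per-site cuts by swapping the grouping key.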
Now, when it comes to real-time, since so much data is coming in so quickly, there needs to be extra intelligence/automation helping out: building a "what looks normal" model, then focusing on identifying and alerting someone when something "odd" is noted. Of course, these products need to store/report on much of the same data as the historical/capacity-centric products as they build credibility and trust with their users.
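A minimal sketch of that "what looks normal" idea, assuming a trailing-window baseline with a sigma threshold. This is just one simple model I'm using to make the concept concrete; the vendors' actual algorithms are certainly more sophisticated:

```python
import statistics

def anomalies(samples, window=24, k=3.0):
    """Flag indices whose value deviates more than k standard deviations
    from the trailing-window baseline (the "what looks normal" model)."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline)
        # Skip perfectly flat baselines (sigma == 0) to avoid false alarms.
        if sigma and abs(samples[i] - mean) > k * sigma:
            flagged.append(i)
    return flagged

# Invented example: steady hourly byte counts, then a sudden spike.
traffic = [100, 102, 98, 101, 99] * 6 + [500]
```

Calling `anomalies(traffic)` flags only the final spike, which is exactly the "identify and alert on the odd" behavior described above.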
So when it comes down to it, much of the same data is being used by two unique users: one focused on planning improvements and the other focused on quickly resolving issues. Now that I've finished writing this post, a better way to organize the field of play is probably not by technology (NetFlow vs. self-collect) but by usage. I'll read some more and do that next time.
Another angle to ponder on this topic will be the WAN acceleration/optimization vendors…but again, for another day.