Posted by: Ryan Shopp
Analytics, BMC, DataCenter, eg innovations, HP Software, Indicative, Integrien, NetQoS, Netuitive, Opnet
So in part 1 we talked through the collection of performance/capacity/availability data. This part focuses on where innovations using that collected data are taking us.
The next level of Performance & Availability management I mentioned previously is coming from a variety of companies doing cross-metric analysis or even automated behavioral analytics. These vendors typically classify themselves as Service Level Management, some flavor of Business Service Management, or Analytics. They either leverage a variety of data collection products or offer capabilities of their own that span multiple sources, elevating and/or automating results in the hope of proactive (even predictive) identification of issues with minimal (striving for zero) false positives. Here are some more thoughts on each of these areas:
- Service Level Management vendors seem to focus on leveraging a variety of data sources/metrics and normalizing them into very detailed quality of service/performance agreements between a service provider and its customers (in some situations the service provider is the internal IT department itself).
- Business Service Management vendors in the realm of performance/capacity/availability seem to focus on mapping each business service (e.g., the application(s) and the infrastructure that supports them) from an end-to-end perspective. Then, if any component in the mapped bundle shows signs of trouble, an alert is raised for proactive resolution. NOTE: BSM is a very broad term – I’m focusing here on just this functional area, not on comprehensive dashboards spanning all functional areas, service desks, etc.
- Real-time Analytics vendors seem to leverage a variety of time-series metrics from various collection sources, mapped together appropriately (like BSM), then use behavioral algorithms to dynamically determine normal behavior. If something deviates from that baseline, an alarm is raised in real time (now we're getting predictive); see the first sketch after this list.
- Historical Analytics or modeling/simulation vendors seem to leverage a variety of data sources coupled with other cross-functional details (e.g., CMDB, configuration settings) to establish a model and expected behavior. Then you can tweak, tune or even re-design to see the impact of potential changes, upgrades, etc.; the second sketch below illustrates the what-if idea.
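To make the real-time analytics idea concrete, here is a minimal sketch of baseline-deviation alerting in Python. It assumes a simple rolling mean/standard-deviation baseline; the actual behavioral algorithms these vendors use are far more sophisticated, and the metric, window and threshold values below are just illustrative.

```python
# Minimal sketch of baseline-deviation alerting on a single time-series metric.
# The core idea: learn "normal" from recent history, then flag samples that
# fall outside it. Real products use much richer behavioral models.
from collections import deque
from statistics import mean, stdev


def detect_deviations(samples, window=60, threshold=3.0):
    """Yield (index, value) for samples that deviate from the rolling baseline.

    samples   -- iterable of numeric metric values (e.g., response time in ms)
    window    -- number of recent samples that define "normal" behavior
    threshold -- how many standard deviations count as abnormal
    """
    baseline = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value            # deviation from the learned baseline
        baseline.append(value)            # keep learning as behavior shifts


# Example: a metric that is steady around 100 ms, then spikes.
series = [100 + (i % 5) for i in range(120)] + [180, 185, 190]
for idx, val in detect_deviations(series):
    print(f"sample {idx}: {val} ms deviates from baseline")
```

And for the modeling/simulation angle, a tiny what-if example using the textbook M/M/1 queueing approximation (response time = service time / (1 - utilization)). Real modeling products build far richer models from CMDB and configuration data; the arrival rates and service times below are made up to show how tweaking an input changes the projected result.

```python
# What-if sketch: project response time for a single-server queue under
# different loads and hardware assumptions (M/M/1 approximation).
def projected_response_time(arrival_rate, service_time_ms):
    """Estimate response time (ms) given requests/sec and per-request service time."""
    utilization = arrival_rate * (service_time_ms / 1000.0)
    if utilization >= 1.0:
        raise ValueError("Demand exceeds capacity; the queue never drains.")
    return service_time_ms / (1.0 - utilization)


current = projected_response_time(arrival_rate=150, service_time_ms=4)    # today
after_growth = projected_response_time(arrival_rate=210, service_time_ms=4)   # 40% more traffic
after_upgrade = projected_response_time(arrival_rate=210, service_time_ms=2)  # faster hardware
print(f"today: {current:.1f} ms, after growth: {after_growth:.1f} ms, "
      f"after upgrade: {after_upgrade:.1f} ms")
```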
We could probably come up with better names for these higher-level performance/capacity/availability areas, but Service Level Management, Business Service Management and Performance Analytics are the ones being marketed today.
One area of data collection and reporting that does continue to innovate is the end-user, passive traffic-flow perspective. This first popped up on the scene back in the late 1990s, and since then there has been a major resurgence of vendors focusing on specific, mission-critical applications. Since these agents typically reside and monitor from the desktop or mobile device, I’ve placed them beyond the scope and control of Data Center Automation. Some vendors do the end-to-end monitoring (as mentioned before) from an appliance in the data center, making some TCP/IP assumptions (e.g., NetQoS, CA Wily).
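For that appliance-side passive monitoring, the core measurement is pairing each client request with the first server response on the same connection and reporting the gap. Here is a minimal sketch; the FlowRecord format is hypothetical, since real appliances derive this from packet captures, which is where those TCP/IP assumptions come in.

```python
# Minimal sketch of passive response-time measurement from flow records.
# The record format is hypothetical; appliances reconstruct it from captures.
from dataclasses import dataclass


@dataclass
class FlowRecord:
    timestamp: float      # seconds since capture start
    connection_id: str    # e.g. "client ip:port<->server ip:port"
    direction: str        # "request" (client -> server) or "response"


def server_response_times(records):
    """Return (connection_id, seconds) for each request/response pair seen."""
    pending = {}          # connection_id -> timestamp of the unanswered request
    results = []
    for rec in sorted(records, key=lambda r: r.timestamp):
        if rec.direction == "request":
            pending[rec.connection_id] = rec.timestamp
        elif rec.connection_id in pending:
            results.append((rec.connection_id,
                            rec.timestamp - pending.pop(rec.connection_id)))
    return results


records = [
    FlowRecord(0.000, "10.1.1.5:51123<->10.2.0.9:443", "request"),
    FlowRecord(0.042, "10.1.1.5:51123<->10.2.0.9:443", "response"),
]
print(server_response_times(records))   # [('10.1.1.5:51123<->10.2.0.9:443', 0.042)]
```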
So now we’ve discussed Performance/Capacity/Availability management and how analytics also occurs within that functional silo. So what does that mean for the Data Center Automation Blueprint from my perspective? Stay tuned for part 3.