Two months ago I wrote a story about Cisco’s Overlay Transport Virtualization (OTV) data center interconnect technology, which Cisco claims can take a lot of complexity out of data center interconnects and simplify the migration of virtual machines across data centers via technologies like VMware’s vMotion.
Systems engineer Kenneth Hellmann read the story recently and took exception to some of the claims made.
I was just reading your “Cisco data center interconnect aims to fix vMotion network trouble”. The following section left me speechless:
[“Between them, you’re running MPLS and a VPLS tunnel. That’s complex. It’s hard to configure. You have to have an MPLS network. You have to configure the VPLS tunnel between them as an overlay. VPLS configuration is notoriously complex. Then you have to optimize performance.”
What’s more, all that work with MPLS and VPLS only sets up a connection between two data centers, Antonopoulos pointed out. If an enterprise wants to establish virtual server migration between three or more data centers, engineers will have to build links between all of them. “Data center A will have to be connected to data center B,” he said. “Data center B will have to be connected to data center C, and data center C will have to be connected to data center A.”
Cisco’s Griffin claimed that configuring a data center interconnect for virtual server migration between two data centers can take months with existing technologies, whereas the OTV feature can be set up in five minutes.]
I freely admit that I am a Systems Engineer for [REDACTED], so you may see me as biased, but those statements are just the purest of nonsense. I teach a 5-day MPLS configuration class and the L2 VPN section takes 4 hours (which includes two labs). That is for VLL and VPLS. And everyone gets it. Why not…it’s incredibly easy. For your own information, here are the configuration lines (over and above the normal configuration needed for an OSPF router) to turn on MPLS and configure a VPLS between 3 sites (of course you must have a similar configuration on the other two PE routers):
mpls-interface e 1/1
vpls Datacenter 20000
vpls-peer 192.168.2.1 192.168.3.1
tagged e 3/1
Yes, that’s it. Now tell me, how does that correlate with the following statements in your article?:
1) “Between them, you’re running MPLS and a VPLS tunnel. That’s complex. It’s hard to configure”
2) “VPLS configuration is notoriously complex”
3) “Cisco’s Griffin claimed that configuring a data center interconnect for virtual server migration between two data centers can take months with existing technologies”
Do you reporters ever vet what you are told or is Cisco given a pass on everything they say? If a CNN reporter reports a third party story which later turns out to be bogus, he is publicly flogged. At the very least he writes a retraction.
How about you?
I won’t be running a retraction or submitting to a public flogging, but I am happy to reproduce Kenneth’s email here so his point of view can be shared.
I should also point out that the first two paragraphs Kenneth reproduced from my story are quotations and paraphrases of statements made by an independent third party, Andreas Antonopoulos, senior vice president at Nemertes Research, rather than by a Cisco representative. I wasn’t serving as a Cisco stenographer on this story.
Also, I’m an editor, not an engineer. So I can only rely on what independent third parties tell me about products and technologies. It’s an unfortunate limitation, I freely admit.
On a related note, Abner Germanow of Juniper Networks pointed out that I should have mentioned in my original story that Cisco’s OTV technology only works in a Cisco environment.
Video. Video. Video. Yup, we keep hearing the drumbeat, too.
Although the jury’s still out on how many enterprises are extensively using video (or plan to this year), Cisco would certainly like you to believe its ascent is as certain as the rising and setting of the sun. And why not? More powerful networks = more ritzy equipment for them to sell, right?
In the event that I’m eating my cynical words over the next few months, here’s an entertaining if not ominous look at what Cisco sees as the fate of enterprises that cheap out on video over WLAN:
[kml_flashembed movie="http://www.youtube.com/v/InWWHKsG8bg" width="425" height="350" wmode="transparent" /]
As we went to press with this week’s story about the most recent supply chain backup at Cisco — part of a broader supply chain problem that has persisted for the past few months — we hadn’t heard much in the way of a response from Cisco executives about the six-month-plus delays on Cisco’s ASA firewalls and other networking gear that customers and partners have been experiencing.
This came into my inbox Thursday around noontime ET (sadly, past deadline), from corporate spokesman John Earnhardt:
“As we mentioned during our last quarterly conference call, we have experienced longer lead times on several of our products. This was the result of increased demand driven by the improvement in our overall markets. And, similar to what is happening in the entire industry we are seeing some product lead time extensions stemming from supplier constraints. We continue to build upon our strong relationships with our suppliers to proactively manage our supply chain and minimize any potential impact to our customers and partners.”
The statement is pretty much what Cisco wrote, as John mentions, in Section 1A of its 4Q09 report on risk factors for investors.
As if the whole situation weren’t causing networking pros enough headaches, reps from Network Hardware Resale tell us that low supply is pushing prices up as well. Ouch.
Let’s see a show of (virtual) hands. Have you had any nasty backorders on networking gear — Cisco or otherwise? What kind of prices are you seeing?
Cisco channel partners and users are having difficulty getting their hands on Cisco ASA firewalls. This news surfaced during a SearchITChannel advisory board meeting, when more than one Cisco VAR board member said they were unable to get Cisco ASAs either directly or through distributors.
Cisco didn’t immediately return emails regarding the Cisco ASA firewall shortage, but executives at a partner distributor confirmed that there is a backlog that won’t be resolved until May or so – and it’s on Cisco’s end.
The distie executives said they don’t know the cause of the backlog, but noted that Cisco has had its share of supply chain problems in recent months. In January, disties and channel partners found themselves unable to access core networking products due to supply chain shortages, Channel Insider reported. Cisco chalked that up to an unexpected surge in demand.
Perhaps an explanation for the Cisco ASA firewall shortage will surface soon.
Cisco may be pushing users to acquire blade server skills, but 64% of CCIEs in a recent survey said that risk management and network security will be the most crucial networking skills to have in the next five years. One in three of the same group said network security breaches will remain among the top concerns of CIOs over the next five years.
Cisco (using the research firm Illuminas) surveyed 970 CCIEs internationally (as part of the CCIE program’s 15th anniversary celebration) to determine what the digital infrastructure landscape will look like over the next five years.
Virtualization played a large role in the survey, with 67% of respondents saying the technology would be the top networking investment over the next five years as CIOs push to reduce power consumption and spending. After years of virtualization being sold as a data center and systems technology, these results highlight the crucial role of networking in virtualization and vice versa.
Virtualization will also introduce network complexity and management challenges, according to those surveyed, which is why 56% of respondents said network architecture skills would be in high demand to take on these new challenges.
The CCIEs surveyed also noted the importance of unified communications, with 77% saying IP telephony has been the single largest trend of the last five years, while 47% said unified communications will be a leading trend in the coming five. Meanwhile, 52% said video would be a leading enterprise green initiative.
An excellent post on the science (and definitely not art) of network troubleshooting on the PacketLife blog last week resulted in a mini-debate over whether network change and configuration management is a lifesaver or a time-sucking burden for network admins. The answer, it appears, is probably somewhere in the middle.
PacketLife blogger Jeremy Stretch runs through his network troubleshooting method, which includes NOT starting the process at Layer One as many do, but also involves detailed recording of problems and their solutions as well as redoing tests numerous times to confirm functionality after the fix is implemented.
For one reader, the idea of redoing tests was laughable, considering he has to wade through a river of paperwork to run even a single test.
“The increasing need to adhere to strict change control procedures kills the science of troubleshooting. In my world one test would require mounds of paperwork and numerous sign offs. To do my job I’m forced to do things under the table and hope I don’t break anything and call attention to my activities,” wrote the reader, who calls himself/herself PompeyChimes.
Those complaints brought on an outraged response from a reader known as HH.
“For the love of god, use proper change management procedures… Too often are problems caused by hotshot admins who think they know everything,” HH wrote.
Stretch tempers the argument with the following middle road response:
“HH makes a valid point. Change controls are great – IF they’re implemented practically. So long as they leave an engineer enough room to maneuver, they can be an excellent tool to help generate documentation during the troubleshooting process.”
We’ve written a number of pieces on the virtues of change management in virtualization and change management in storage, but much more often than not we hear of the nightmares involved in dealing with change management. The answer probably doesn’t lie in doing away with change control, but instead in implementing procedures that are realistic for the admins carrying them out daily.
Network Computing blogger Howard Marks made some good points recently about why Brocade has struggled to sell the Ethernet networking product line it acquired from Foundry back in the summer of 2008. As Marks points out, Brocade tried to sell Foundry products in the same way it has traditionally sold its storage networking products: via OEM agreements with big server and storage vendors like IBM and Dell. But networking pros aren’t much interested in buying networking products from server vendors. They prefer going with someone they know, such as Cisco, ProCurve or… Foundry.
Wall Street has been displeased with Brocade’s Foundry results so far. As Munjal Shah, analyst with Jefferies and Stifel Nicolaus, told the Wall Street Journal:
Brocade is facing challenges in integrating the Ethernet [business] as the sales model is different and Ethernet [original equipment manufacturers sales] are slow to materialize. Brocade has solid position in data center and relative valuation is low, but we believe it will take some time to resolve the execution issues.
Brocade has responded by appointing John McHugh as its new chief marketing officer. McHugh is a veteran of HP, where he is credited with starting up the ProCurve division. More recently McHugh was the head of Nortel’s enterprise solutions business. No surprise that he’s jumping ship after the Avaya acquisition. Burnishing the Foundry business appears to be a nice challenge for him.
Marks says Brocade also got away from what made Foundry a modest success in a crowded networking market: good support from sales engineers. Brocade tried to monetize those resources by turning what used to be free support into professional services. This alienated existing customers, apparently. Now Marks says he’s hearing from internal sources that Brocade is going back to the old Foundry approach, which should help it win over some new customers and perhaps retain some existing ones.
Market research firm Dell’Oro Group has published its latest quarterly market update on the wireless LAN industry. According to the firm, the market hit an all-time high in the fourth quarter of 2009. The ratification of 802.11n has really set this market on fire. Apparently IT organizations in the retail, education, healthcare and hospitality sectors are all spending a ton of money on new wireless LAN infrastructure right now.
This is driving a lot of revenue growth, but some vendors are reaping the benefits more than others. I asked Dell’Oro analyst Loren Shalinsky for details.
Cisco remains number one in the market by a huge margin, Shalinsky said. But Cisco did not have a good quarter. Its wireless LAN market share shrank by about four points, he said, and revenue was down for the quarter (Shalinsky didn’t say by how much).
Motorola had an awesome quarter, growing by 40% sequentially from the third quarter, he said. The growth spurt nearly helped it overtake Aruba Networks as the number two vendor for enterprise wireless LAN. Aruba’s revenue grew by 7% in the same period. Shalinsky said total product revenue for the fourth quarter was $42 million for Aruba and $40.5 million for Motorola. Of course, Aruba would point out that it is also selling quite a few products through its OEM relationship with Alcatel-Lucent, which saw its revenue grow by 30%. Alcatel actually overtook Meru Networks in market share and claimed the number five position. (HP ProCurve is holding steady at number four.)
The Ponemon Institute recently surveyed 155 globally certified PCI DSS compliance auditors about how the largest retailers (Tier 1 merchants) are doing with respect to compliance with the credit card industry’s cardholder data security requirements.
Asked by Ponemon to rank the effectiveness of technologies used to protect cardholder data, auditors identified encryption of data at rest and in motion, firewalls and endpoint encryption as the best technologies. Least effective were ID and credentialing systems, intrusion detection and prevention systems (IDS and IPS), and website sniffers and crawlers. Ponemon’s research didn’t explain why auditors felt this way about the various technologies. A systems administrator at a nonprofit recently told SearchNetworking.com that his organization is looking at segmenting its network with VLANs to help implement the controls it needs for compliance.
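For readers curious what that kind of VLAN segmentation looks like in practice, here is a minimal sketch in generic Cisco IOS-style configuration. The VLAN number, addresses and ACL name are hypothetical, and a real PCI scope would involve far more than one access list:

```
! Hypothetical sketch: isolate cardholder-data systems on their own VLAN
vlan 100
 name CARDHOLDER-DATA
!
! Apply a filter to traffic leaving the segment toward the rest of the network
interface Vlan100
 ip address 10.10.100.1 255.255.255.0
 ip access-group PCI-SEGMENT in
!
! Permit only the (hypothetical) payment application server over HTTPS;
! log and drop everything else originating in the cardholder VLAN
ip access-list extended PCI-SEGMENT
 permit tcp 10.10.100.0 0.0.0.255 host 10.10.200.5 eq 443
 deny ip any any log
```

The idea is simply to shrink the audit scope: systems outside VLAN 100 never touch cardholder data, so the strictest controls can be concentrated on one segment.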
Also, the corporate network is the infrastructure element MOST vulnerable to a potential data breach, auditors said. Fifty-one percent of auditors identified corporate networks as a weak point. Corporate databases (43%) were the second most vulnerable. Only 10% considered unattended payment terminals a vulnerability.
Ponemon also revealed that the average Tier 1 merchant spends about $225,000 on its compliance audit, but it didn’t identify how much these companies spend on operations and technology. Auditors said that business units are the most likely (40%) part of a company to be responsible for auditing PCI compliance, but they are unlikely to own responsibility for delivering that compliance (19%). IT security (30%) and the office of the CIO (10%) combine to own a plurality of compliance responsibility. This division of responsibility between compliance and auditing could create some tension between IT and business units.
As I was skimming through stories from the RSS feed of a competing publication, I came across these two sequential headlines:
“Want a job? Get a Computer Science Degree”
“Boeing prepares to cut nearly 800 IT workers”
Talk about mixed messages. But that’s what this economy has been giving us for a couple years now, hasn’t it?