It’s not often we’d open an article quoting Einstein. “The significant problems we face cannot be solved by the same level of thinking that created them.”
As a companion, readers may wish to request the TechTarget version of our IT Manager's step-by-step VPN procurement Mindmap.
In other words, if you are considering changing your Global MPLS provider, something must change within your procurement process to avoid the same problems and issues occurring again.
The overall intent of any WAN procurement project is as follows:
- Avoid the typical pitfalls, issues and problems faced by the majority of Enterprise and medium-sized organisations when procuring international MPLS providers
- Align the specifics of your business strategy, applications, business continuity and process to the service provider marketplace
- Understand the impact of engaging with a provider which doesn’t operate a repeatable process to identify your business needs – the issues are often felt for years
- The Mindmap is designed to help you follow along as you move through the procurement process
Read on to understand some of our thoughts and ideas surrounding specific Global MPLS provider challenges.
The first steps: align your business
We all like to think we know and understand aspects of our business. In many ways, it’s not the knowledge we have but taking that knowledge and ensuring suppliers understand your value and your competitive edge: what makes your business unique. In respect of Global MPLS network procurement, this means ensuring that any prospective provider clearly relates the specifics of your business to their capability to deliver a Global MPLS network service. When IT and business are misaligned, the business is seriously impacted. We’ve witnessed clients facing merely frustrating issues such as incorrect billing, but also clients which have suffered huge downtime and productivity losses. If you follow a repeatable process within international service provider procurement, you are in a better position to tick off the boxes as you complete the project, ensuring good practice and due diligence are adhered to throughout.
The workflow for network procurement sounds vast. If you consider everything from application performance through to high availability, adds, moves and changes, delivery and migration, you gain an understanding of why some companies just keep plodding on with their existing supplier. However, we know it doesn’t make sense to do more of what simply isn’t working.
Of course, being able to find a perfect provider is most certainly a futile task as there will always be aspects which don’t fit. However, if you know and clearly understand where these weaknesses exist, you are able to either work around them or adapt your business. As an example, you might find a provider which takes time to make bandwidth upgrades. This fact may either be a show stopper or something which you are able to work with, depending on the detail.
In short, the areas we consider are as follows:
- Business Strategy
- Business Continuity and DR (Disaster Recovery)
- Documentation and process
- Due Diligence throughout contract
- SLA (Service Level Agreement)
- How to achieve Global and UK proposal and pricing excellence
Becoming a strategic thinker
Strategy is the direct link between your business specifics and your provider. The subject of strategy often conjures up thoughts of huge amounts of work but in reality, and as far as Global MPLS network provider procurement is concerned, strategy is about defining the key areas which make your business successful. We see this time and time again: the businesses which are successful within their particular niche really understand their go-to-market strategy and the areas which result in customer retention.
When we think about strategy, we consider your business and the impact of particular MPLS network areas. As an example, your key sites where you deliver services or data to customers must offer up a capability which contains no single point of failure. Or, perhaps there is a particular application which must perform well – the performance is key to customer satisfaction. Although these aspects will no doubt be covered in the technical design, outlining them and defining these key areas as part of your strategy will have a profound effect on the overall outcome.
Application performance and enhancements
Understanding applications is the basis of productivity for both internal users and customers engaging with your business in various forms. The way in which applications rely on MPLS varies, but the key aspects cover latency, jitter, uptime and packet loss. Service providers are able to offer feature-rich solutions which include QoS (Quality of Service) to provide confidence in the performance attributes. However, an SLA is only a commercial agreement: the network should never be engineered based on an SLA alone, although it is a good indication of overall performance. The mindmap will also point you to areas which you may not have considered, such as the impact of ‘chatty’ applications.
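To make the relationship between the latency and jitter attributes concrete: jitter is simply the variation between successive latency samples. Here is a minimal Python sketch (the probe values are hypothetical, not measurements from any provider) showing how both figures fall out of the same set of samples:

```python
# Hypothetical one-way latency samples in milliseconds, e.g. from periodic probes.
samples_ms = [31.2, 30.8, 33.5, 30.9, 31.4, 35.0, 31.1]

def mean_latency(samples):
    """Average latency across all probes."""
    return sum(samples) / len(samples)

def jitter(samples):
    """Jitter as the mean absolute difference between consecutive samples."""
    deltas = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

print(f"latency: {mean_latency(samples_ms):.1f} ms, jitter: {jitter(samples_ms):.1f} ms")
```

Two networks can show the same average latency but very different jitter, which is why voice and video traffic needs both attributes in the SLA, not just a latency figure.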
Keeping those applications running with maximum uptime
Clearly, having well-performing applications is great from a business perspective. A major part of your application performance is uptime. We recently wrote an article on the BT RA02 (Resilient Access) product where we described some of the aspects of achieving a solution with no single point of failure. We are pleased with all areas of the mindmap, but particularly with the resilience and diversity section. Here, you’re able to easily see the pitfalls and the questions you need to ask of network suppliers to ensure there are no single points of failure within your capability.
Topology, any to any
The native topology of MPLS facilitates any-to-any connectivity. However, topology is also concerned with restricting access to certain areas of the business. As an example, clients create multiple VPNs within a single MPLS network for voice and video to keep them separate. Beyond applications, MPLS solutions allow you to create separation for extranet clients: you may have a supplier which requires access to areas of your network, such as procurement of goods or services. The topology may be created to facilitate this capability.
Within topology, you need to consider reach. If your organisation is looking at expanding into particular areas on a global basis, the future reach of your provider becomes a critical aspect. Even if we consider UK clients, opening a data centre in a location which is not well served by a particular provider of choice will impact you in terms of cost and potentially uptime.
Projects which need to be factored
With many organisations, there will always be a future project in the wings. We see this with various departments considering new initiatives, or the business as a whole might be on an acquisition trail. You may believe this area to be part of strategy, but our belief is that these aspects require a section of their own. The situation you need to avoid is one where you put in place a solution which isn’t fit for purpose because the business launches a new initiative or procures another business.
Keeping up to date with documentation and due diligence
One of the major disappointments clients experience relates to poor network documentation and due diligence throughout their contract. We worked with a client recently whose network had not been configured correctly from day 1, with a serious knock-on effect on their business. Applications worked, but performed very badly, and nobody at the service provider had a good understanding of the configuration.
In order to avoid this situation, global MPLS providers need to define how they maintain documentation and also where the documentation is stored to avoid versioning problems. The mindmap will provide details on the areas we recommend you consider.
Throughout the contract, due diligence with documentation is important. Just as a repeatable process is required for WAN supplier selection, a workflow is also required to maintain good practice, covering such areas as trend reporting and SLA breaches.
Global MPLS Providers & Service Levels, Delivery and Migration
The Global MPLS network mindmap provides a focus on the key areas of service levels, including the usual suspects from latency, jitter and throughput to uptime and packet loss. Within each of the service areas, we point you to pitfalls and to where the service provider marketing may miss out some of the key points. The SLA is a good indication of the provider’s performance, not only from the perspective of ongoing service but also delivery aspects of the service, including adds, moves and changes.
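When comparing uptime SLAs, it helps to translate the percentage into an allowed downtime budget. A short Python sketch makes the arithmetic explicit (the SLA percentages below are illustrative, not any provider's contracted figures):

```python
def downtime_budget_minutes(uptime_percent, days=30):
    """Minutes of downtime permitted per period for a given uptime SLA."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (100.0 - uptime_percent) / 100.0

# Illustrative SLA tiers and their monthly downtime budgets.
for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {downtime_budget_minutes(sla):.1f} min/month")
```

At 99.9%, a provider can be down for roughly 43 minutes a month and still meet the SLA, which is exactly why the SLA should inform, rather than replace, the resilience design.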
Perhaps one of the biggest areas of weakness surrounds slow and painful adds, moves and changes. Some providers are more agile than others in respect of changes, but we find that the process is often improved if the client has a clear understanding of the workflow from raising a change through to placing an order for the change and delivery. Some aspects of changes will be fast, others will take time. Again, which of these aspects will impact your business becomes clear once the specifics of your organisation are aligned with the product.
On the delivery side, the SLA will apply to lead times. There are certainly aspects to consider here which are going to become critical to your project including the actual process to take you from a design through to ordering, acceptance, and circuit delivery milestones. On top of this, you will also want to be thinking about migration and how you will take your service from one provider to another.
Budget: achieving the best international MPLS provider commercials
Obtaining a good deal requires knowledge of process. In addition to our knowledge of MPLS network pricing workflows, the mindmap does a good job of bringing other areas to your attention which make a commercial difference. One example is creating your statement of requirements. Any service provider prefers to work on a well-qualified set of requirements, and presenting your needs in this way has a dramatic effect on how they approach the commercials of your solution. If, for example, you simply present a list of sites on a spreadsheet, the provider will not take your requirements seriously and will forward out standard pricing.
If you have any further questions, let us know. The organisations on our PDF are also able to offer VPLS providers services.
By Chuck Moozakis
Private equity investment firm M/C Partners has a long pedigree in telecommunications investing. The Boston-based firm, formerly known as Media Communications Partners, has overseen more than $1.5 billion in placements over the past two decades as it focused on companies spawned from the landmark Telecommunications Act of 1996.
So, when Managing Partner Gillis Cashman talks about the firm’s latest investment, a $50 million equity funding in data center services company Involta, his thoughts bear at least a cursory listen. Why, after previously investing in heavyweight companies that included MetroPCS and Level 3 Communications, does M/C now believe Involta, with just a handful of data centers in towns such as Duluth, Minn., and Marion, Iowa, is a good bet?
The simple answer? Application performance. Or more specifically, the lack thereof.
“There is a view in cloud computing that data centers are now becoming commodities and that proximity doesn’t matter; you can host your servers anywhere,” Cashman said. But concerns about application performance, and to a lesser extent security, are inhibiting cloud’s success, he said. “When you think about application performance, it really requires a different architecture, where you need to get those servers and applications very close to the end user.
“Instead of 50 servers being in a data center in the middle of nowhere, now what you need is 50 servers at 50 data centers close to the edge where the redundancy is in the network itself.”
And, Cashman said, those DCs should be located where the need is greatest: to serve enterprises in communities that are not served by Tier 1 or Tier 2 providers. These companies, he said, still have mission-critical applications, but they can’t get the service-level agreements they need to ensure their employees and customers are getting the application performance they deserve.
“There is far more insourcing going on in smaller markets,” Cashman said. “The reason is they either don’t trust the facilities in the market or there are no facilities in the market, so they are forced to deploy their applications internally.” To target these types of customers, Involta builds a dedicated fiber link from the DC to the enterprise, effectively creating a leased line. “Performance across this network is guaranteed because it never touches the public Internet, and to me, that is a critical factor that will drive more outsourcing [to data centers]. You need to have that infrastructure in place to effectively place these private cloud architectures.”
Ensuring that Acme Manufacturing in central Iowa has the same broadband capability and application performance as XYZ MegaCorp. in New York City is smart business, and as M/C Partners almost surely agrees, it’s good business, too.
The default for the Enterprise is typically to progress their MPLS proposal with the larger end of the market, which is understandable: an Enterprise requires a service provider of similar stature to provide comfort in long-term stability. On the flip side, smaller organisations (think SME) often avoid the larger service providers in favour of the agility and focus which smaller providers typically offer.
I personally worked for a large service provider in the mid-2000s and recall a strategy change where the CEO decided to effectively segment the business. In short, the provider decided they were expending way too much of their employees’ time supporting SME businesses which represented a fraction of their revenue. As a business decision, it was probably the right one to make, but I imagine the SMEs being given the news that they were effectively being forced into a different support channel were not impressed. Within the same provider, they also launched a new program of professional services where the large enterprise would be expected to pay for service and project management, i.e. these resources were no longer provided by default. I’m not judging their decision, and in many ways the service and support increased for their Enterprise clients, which probably had the budget.
The smaller SME should therefore be wary of entering into contracts with the larger providers, since they may not achieve the focus and service afforded to the larger-paying clients. I appreciate this is a broad statement to make, and larger service providers are making strides in changing how they support the SME market. As an example, BT has launched a specific product dedicated to the SME market, but the release is early days so we will have to see how things pan out.
Let’s look at some of the comparisons.
Clearly, larger service providers have huge revenue streams, which offers the stability associated with institutions of similar size. This said, profitability is still very important: we have witnessed large providers such as WorldCom enter Chapter 11, so size is not always a guarantee of stability. However, all things being equal, a large stable company provides long-term comfort when signing WAN contracts. The smaller providers are often good, profitable organisations, but they have a much shorter way to fall if things go wrong. We know of companies which are reliant on a few contracts for their income and profitability, which is clearly a risk, and others which have a good broad range of contracts and so are more stable and further along their growth path. It is also true that smaller providers are more prone to strategy changes: in any given month, they may decide to invest, which changes their financial position and increases risk.
Staff and coverage are also areas which require clarification. Using another example, a provider we worked with under a consultancy arrangement had only two main POPs (Points of Presence) in the UK and only a few staff. We asked how they would support offices over large distances and they said: “We would put replacement hardware in a van and ask one of the engineers to drive it over”. Whilst this approach may work, it’s clearly not a particularly robust support process.
Coverage is also highly variable among smaller providers. Our experience ranges from companies with hardware in an office (yes, really), through a couple of core POPs, up to well-engineered networks. I always recommend that IT Management looking at procurement clearly understand the true MPLS coverage of service providers.
Over and above coverage, the process for adds, moves and changes varies considerably when comparing the larger organisations with the smaller companies in the marketplace. In my experience, smaller represents agility, with larger service providers often creating more bureaucracy.
Over the years there has been one constant split between UK-based organisations and their US counterparts. The US appears to procure MPLS capability as wires-only, i.e. self-managed, versus the UK’s tendency to outsource.
We are witnessing companies such as BT experience more traction with self managed WAN products such as IP Connect unmanaged which allows clients to procure network connectivity and not management. Note: For UK readers, IP Connect replaces the IP Clear product.
As a rule, service providers are traditionally a little cumbersome to deal with when making adds, moves and changes. In many ways, the lack of agility when making changes is perhaps one of the main reasons why there is so much churn in the industry. In our past life, we assisted organisations with WAN procurement, which provided an insight into why businesses were looking to change service provider. One of the main reasons? Making changes to the network took way too long, the change was often incorrect, and the documentation reflecting the updated network was poor.
With the above in mind, I would have expected to see less churn in the US market simply because businesses are in a position to make their own changes to the network. I believe this tells us that the frustration with service provider agility is simply one reason why an organisation procures a brand new WAN and the sum reflects a number of issues and problems.
When outsourcing to a service provider, you are effectively reliant on the provider to configure and maintain the capability of your edge routers (and potentially switches). There is an obvious benefit here, since outsourcing allows your IT team to focus on other areas of the business rather than the WAN. However, the negative occurs when the provider simply does not act quickly enough and/or misunderstands requirements, resulting in incorrect delivery of the change. In my experience of working with and for large service providers, we’ve seen the simplest of changes take weeks, which has a profoundly negative effect on the business. When these kinds of delays occur, the WAN becomes a bottleneck rather than an enabler.

In 2014, you would also expect to see innovations in the field of change requests, and for sure we are seeing some improvements. However, the majority of providers are very much still reliant on the same bureaucratic processes. This said, we have seen some real innovation, with portals now providing an easy-to-access method of requesting changes with real-time updates as progress is made. These portals generally only support a ‘base’ level of changes though, and the more complex or un-productised changes still very much require a manual process.
The clients which adopt an in-house managed approach are well positioned to make changes as and when required. However, any changes which require the provider to also alter the configuration on their network may still result in similar delays to those experienced with outsourced capability. In general though, straightforward changes are completed quickly as and when required, which provides an agility the service providers generally cannot match. The negatives revolve around having to ensure your IT staff are well positioned to troubleshoot, configure and maintain the due diligence required with a corporate network. As MPLS is a provider network protocol (and not deployed on your edge router), the actual configuration required is relatively simple. Self-managed networks are more often than not still fully monitored by the service provider, which is key to leveraging their knowledge and insight into the end-to-end connectivity.
The last point to make is that outsourcing does not have to be provided by the service provider. There are many opportunities to leverage the ability of niche managed IT companies to look at providing an alternative. As cloud services are becoming more prevalent, IT companies are leveraging their capability to provide a ‘one stop shop’ for managed services. I believe we will see this occur more as we move forward.
Riverbed Technology conducted a short survey of Interop attendees at its booth during the Las Vegas conference in April, but the findings didn’t stay in Vegas. The survey of IT professionals revealed that IT knows that poor performance by mission-critical applications can hurt their businesses, but too few of them are doing anything about it, according to Riverbed.
Riverbed asked 210 Interop attendees about their awareness of the technologies that exist to mitigate app performance problems, and whether their company was taking proactive steps to avoid application issues, said Steve Riley, Technical Director for the CTO’s office at Riverbed. The top three causes of performance problems, according to Riverbed’s survey, are insufficient bandwidth, too much latency and slow servers. But a gap remains between those who are aware of the problem and those who are actually implementing solutions, Riley said. In fact, the results found that 80% of respondents know that slow business-critical applications can have a moderate to extreme impact on overall business performance, but only 50% of them were actually doing something to solve application performance issues.
While budget constraints are undeniable, Riley said, Riverbed believes that the overall willingness to adopt application delivery and performance management technology that can help is “missing to a certain degree.”
“Seventy percent of respondents said, ‘Let’s throw more bandwidth at the problem and see if it goes away,’ but only 50% of businesses actually have bought more bandwidth,” he said. And 67% of respondents believe that WAN optimization solutions are a good way to combat application performance issues, but only 42% of businesses were using WAN optimization tools. Finally, 52% of respondents believe that they can solve some performance problems by geographically distributing application workloads, but just 28% of businesses have followed through with that method.
So what can businesses with budget constraints do to mitigate application performance problems? First, Riverbed suggests that businesses evaluate their mission-critical apps before buying a bunch of bandwidth. Some applications won’t simply begin performing better with more bandwidth because latency and jitter could still be an issue, Riley said.
“Don’t just guess at what your performance problems are,” he said. “We really suggest that people analyze and diagnose their application problems first.”
Businesses should also think about geographic distribution, Riley said. “Keep in mind that it does take time for applications located in one place to move to another place. With major cloud providers scattered all over the world, multiple instances of workloads is becoming an easier thing for people to do, so [users] can access that data anywhere.”
Last, but certainly not least in the opinion of a vendor focused on IT performance, businesses should also put WAN optimization tools where they make sense. “The whole purpose of that technology is mitigating latency, and allowing end users to access applications as if they are local, even if they are on the other side of the world,” Riley said.
What’s the term for when you’re just throwing things out there to see what works? Spitballing? I’m just spitballing here. I think Juniper Networks should start selling Junos, the operating system for its switches and routers, as a software product.
I believe in branding. If a company has a brand that users like, use that brand. Everyone loves M&Ms and lots of people love Skittles, but you never see an advertisement for their parent company, Mars Inc. Remember when BlackBerrys were a big deal? Research In Motion (RIM) rarely, if ever, tried to market new products under the RIM brand. Heck, the company changed its name to BlackBerry eventually.
Juniper has Junos. It is a great brand. It’s a great piece of technology. Every Juniper customer whom I’ve talked to loves Junos. Sometimes they don’t completely love the hardware Junos runs on, but they always love Junos. To paraphrase one example from a customer: “The Virtual Chassis technology on this particular model of Juniper EX switch is kind of a pain in the neck to work with, but darn it, I love Junos!”
Juniper knows it has a good thing going with this brand. Take a look at its relatively new network management software brand, Junos Space. When Cisco re-branded its network management software, it chose Cisco Prime. It didn’t bother re-purposing NX-OS, the operating system for its Nexus data center switches, as a management software brand. NX-OS Prime? Too many Cisco customers still grouse about the instability they dealt with in early NX-OS code releases back in 2009 and 2010. But customers love Junos, so Juniper extended the brand.
I know what you’re thinking. This is the switch and router industry. These companies don’t sell software. They sell boxes. Big vertically integrated systems with high profit margins that (sometimes) keep shareholders happy. But things change. Cisco built the Nexus 1000v, which looks and feels like a Nexus switch, but serves as a distributed virtual switch on a hypervisor host. Everyone is at least testing a product with Open Virtual Switch (OVS) software in it. Cumulus Networks, Big Switch Networks and Pica8 are all building business around switch software that can run on white box or bare-metal switch hardware.
Why not throw Junos into the mix? How many Juniper customers would like the chance to run a Junos router on an x86 server or regain control over the virtualized access layer of their data center by Junos running on hypervisors? How many cloud providers or Web content providers would try out Junos as an OS for bare-metal switches? I have no idea, but I find the concept interesting.
It will probably never happen, because… shareholders. Cisco would never do this and neither can Juniper. Wall Street wants profit margins. Is it a complete coincidence that after Dell went private, it started selling bare-metal versions of its data center switches with support for Cumulus and Big Switch operating systems?
I’m not so naïve as to think Juniper could just rip Junos out of its system stack and release it as a product. It would probably take a lot of time and money to do such a thing. I might even be dead wrong about releasing Junos as a software product. Maybe people don’t want it in that form. I just think it’s an interesting idea. Last year Juniper memorably announced an enterprise software licensing program, Juniper Software Advantage, but offered absolutely no software products within the licensing regime at the time of the announcement. It was a perplexing move, but months later it moved its security software products into the program. I think people would have really been excited to see Junos available through that licensing scheme. Like I said, I’m just spitballing.
As the large group of IT administrators and engineers filed into the IPv6 session at this year’s Interop Las Vegas, they probably were expecting to hear yet another boring lecture about the importance of upgrading from IPv4.
After all, it’s been 15 years since IPv6 was ratified. And for proponents of the specification, it must seem it will be another 15 before it gets adopted in the majority of enterprise and service provider networks.
These Interop attendees already knew the benefits of IPv6: it is a more efficient transmission method; it supports more robust security standards. And they are well aware that IPv4 is running out of addresses. IPv6, with its 128-bit addressing scheme, is the only logical solution, backers say.
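The jump from 32-bit to 128-bit addressing is easy to demonstrate with Python's standard `ipaddress` module; this quick sketch only illustrates the address-space arithmetic:

```python
import ipaddress

# IPv4: 32-bit addresses, roughly 4.3 billion in total.
ipv4_space = 2 ** 32

# IPv6: 128-bit addresses, about 3.4 x 10**38 in total.
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:,}")

# The ipaddress module handles both families transparently.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.version, addr.exploded)  # 6 2001:0db8:0000:0000:0000:0000:0000:0001
```

The `2001:db8::/32` prefix used here is the documentation range reserved for examples, so no real network is referenced.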
But those reasons alone hadn’t motivated most of the engineers sitting in the Mandalay Bay conference room in Las Vegas to make the move. Despite the benefits of IPv6, making the necessary infrastructure changes to adopt the new standard takes work, time and money. And for most organizations, those three ingredients are always in short supply.
So it was up to speaker Edward Horley to persuade these professionals why it was time to move to IPv6. And he had a very convincing argument: You already have IPv6 in your networks, he said, and if you haven’t properly planned for it, prepare for trouble.
And how did IPv6 sneak into the networks overseen by these engineers, the vast majority of whom said they weren’t running the new protocol? From simple upgrades, Horley said. Upgrades or deployments of such common operating systems as Microsoft Windows Vista, 7, 8 or Server 2008, for example, are all grounded in IPv6, as are Server 2008R2, 2012 and 2012R2.
“You’ve already deployed it. It’s already there, and you better know what it is doing in your network,” said Horley, principal solutions architect at Campbell, Calif.-based Groupware Technology and a long-time IPv6 evangelist. “This is one of the things people don’t understand. It’s on by default. This is your domain of responsibility, and it’s your job to understand it.”
What all of this means, Horley said, is that network administrators better learn, and learn quickly, how IPv6 will affect their operations. “I am here to tell you that you did deploy IPv6, you didn’t do it in an educated way and you need to understand the impact,” he said. “It’s alarming: The vendors did this and they didn’t tell you. Well, it’s easier to adopt and support IPv6 than to run away or ignore it.”
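Horley's "it's already there" claim is straightforward to check. A small Python sketch using only the standard `socket` module reports whether the local stack was built with IPv6 and which IPv6 addresses the local hostname resolves to (output varies per machine, so none is shown here):

```python
import socket

# Was this Python build compiled with IPv6 support?
print("IPv6 support in stack:", socket.has_ipv6)

# List any IPv6 addresses the local hostname resolves to.
try:
    infos = socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET6)
    for info in infos:
        print("IPv6 address:", info[4][0])
except socket.gaierror:
    print("No IPv6 addresses found for local hostname")
```

On most modern desktops and servers this prints at least a link-local address, which is exactly the "on by default" behaviour Horley warns about.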
Horley didn’t sugarcoat the challenges administrators face. The transition, he said, “will be ugly for everyone because we’ve been kicking the can for 10 years.” As a result, carriers have resorted to tactics such as carrier-grade network address translation (CGN) to mitigate the exhaustion of IPv4 addresses. But there are serious shortcomings to that technique, especially for websites such as Google Maps that require hundreds of sessions to complete. And dual-stack, a tool that permits the support of both protocols, doesn’t hold a long-term answer either, Horley said.
“Six solves these problems,” he said.
But wait. As the infomercial announcers like to say, there’s more.
Further delay in migrating to IPv6 will also begin to seriously impact such elemental services as VPNs, VoIP and Session Initiation Protocol-based operations, Horley said.
Bottom line? “You don’t have to start tomorrow, but you do have to start thinking about it,” Horley said, adding yet another reason why administrators can’t wait: the end of Windows XP support. As that popular OS gets phased out at enterprises, three guesses at what’s waiting in the wings.
If it makes you feel any better, organizations spent more than $12 billion on firewall, intrusion prevention, endpoint protection and secure Web gateway products last year. That’s just a drop in the tens of billions of dollars enterprises spent overall in the past 12 months to protect their digital assets.
Alas, it’s not nearly enough, as recent data breaches at Target and Neiman Marcus have illustrated.
And the best (that is, worst) is yet to come.
“I really think we are looking at some new aspects” in malware and enterprise vulnerabilities, said Gartner Research Director Eric Ahlm at a McAfee data protection webinar held in mid-January. “There is a change in the threat landscape.”
Among the changes: User-based attacks are becoming easier and targeted attacks have become much more intelligent.
“Being able to prevent is much more of a challenge,” Ahlm said.
At the same time, hackers enjoy a well-oiled ecosystem: whether they are organized state agents or solitary data thieves, they can easily tap into a willing market for their stolen information.
But wait. There’s more: The continued growth of mobile devices is bringing with it some especially sobering security trends, according to Gartner, including the following:
–By 2018, 25% of corporate data (compared with 4% today) will bypass perimeter security and flow directly from mobile devices to the cloud.
–Through 2017, 75% of mobile security breaches will be a result of mobile application misconfigurations.
“If we’ve lost our control plane and lost our visibility plane, it’s going to make [asset protection] much more challenging,” Ahlm said.
That said, not all is gloom and doom. Adaptive, rather than preventive, security will become an important weapon in enterprise security arsenals.
“We need to be able to find compromised systems and know what methods we have to find these systems,” Ahlm said, adding that a security strategy anchored by situational and contextual awareness platforms will be critical.
“Security teams need to hunt and they need to look. Knowing what’s involved and what’s in play will be vital in building programs that succeed.”
Among his recommendations:
–Use network analysis in conjunction with global threat intelligence feeds to determine if a system is under a hacker’s control.
–Correlate internal information such as network logs, network behaviors, host behaviors and user importance. That situational awareness can help organizations prioritize and triage in the wake of a data breach, Ahlm said.
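Ahlm's correlation advice can be sketched in a few lines. The following is a toy illustration with made-up data: the feed addresses, usernames, and importance scores are all hypothetical, and a real deployment would pull these from a SIEM and a commercial threat-intelligence feed:

```python
# Toy sketch: join outbound-connection logs against a threat-intel feed,
# then triage hits by the importance of the user involved.

threat_feed = {"203.0.113.7", "198.51.100.9"}          # example C2 addresses
user_importance = {"cfo": 3, "dev1": 2, "intern": 1}   # hypothetical ranking

conn_log = [
    {"user": "intern", "dst": "198.51.100.9"},
    {"user": "cfo",    "dst": "203.0.113.7"},
    {"user": "dev1",   "dst": "192.0.2.10"},
]

# Flag any connection whose destination appears in the threat feed.
hits = [entry for entry in conn_log if entry["dst"] in threat_feed]

# Situational awareness in miniature: most important users first, so the
# response team knows which compromise to chase down first.
hits.sort(key=lambda e: user_importance[e["user"]], reverse=True)
for entry in hits:
    print(entry["user"], "->", entry["dst"])
```

Here the `dev1` connection is ignored (its destination is not in the feed), while the `cfo` hit is surfaced ahead of the `intern` hit, which is the prioritize-and-triage behavior Ahlm describes.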
Say bye to information technology. Say hello to enterprise technology.
So says Nemertes Research President Johna Till Johnson. In a discussion highlighting Nemertes’ 2013-2014 Enterprise Technology Benchmark study earlier this summer, Johnson said the shift from IT to ET will be no less dramatic than the transition from MIS to IT 30 years ago. It’s a swing, she said, that will have a big impact on IT professionals.
“In a nutshell, what we are seeing is that IT is now being asked to be a trusted adviser to drive the business,” she said. “IT practitioners are now being asked to move into an enterprise technology role,” supporting and guiding the entire business.
Fueling the shift: the rise of the remote worker, untethered from the office and free from the physical network. Employees, Johnson said, “are not at their desks; they are out serving customers, taking orders.” The result: Instead of networking knowledge workers, administrators today must network the broader enterprise.
Fortunately for IT executives, COOs and CEOs appear to be actively soliciting their advice on how ET can be made a reality. Nemertes’ research found that 73% of CIOs responding to its survey have been asked to participate in an ET transformation project; only 13% gave the same answer in Nemertes’ 2012 survey. That’s roughly in line with Cisco’s 2013 Global IT Impact Survey, which found that nine out of 10 IT execs collaborate with corporate brass at least monthly to coordinate strategic initiatives.
That’s the good news. The challenge: Becoming ET-savvy won’t come without a hitch. Where IT is all about getting information transmitted from point “A” to point “B,” ET is understanding how that conveyance helps the organization innovate its operations. Or, as Johnson described it, “IT is about getting the trains to run on time; innovation is about disrupting the existing process, so we are seeing the concrete impact of the innovator’s dilemma”—where companies risk their own survival by failing to adopt technologies or strategies that will meet their customers’ future needs. Orchestrating that shift successfully “will have a huge impact for us who work in the tech field,” she said.
So get ready for ET. It will be here sooner than you might think.
On Monday, Cisco’s carefully curated press program at Cisco Live Orlando focused on the refresh of the company’s campus switching portfolio, most notably with the new Catalyst 6800 series of chassis switches, whose supervisor modules and line cards are backward- and forward-compatible with the Catalyst 6500.
Meanwhile, Cisco Live attendees got a special treat during a keynote presentation by David Yen, vice president and general manager of Cisco’s data center technology group. Yen shared the stage with a very tall member of the new Nexus 7700 series of data center core switches. The new family of switches has up to 83 Tbps of total switch capacity, with 1.3 Tbps of throughput per slot. A fully populated top-of-the-line Nexus 7700 can support 384×40 Gigabit Ethernet (GbE) or 192×100 GbE ports. The switch had attendees swooning.
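As a rough sanity check on those headline figures, the front-panel port counts can be multiplied out. Note that a vendor's "total switch capacity" number typically reflects internal fabric bandwidth rather than front-panel port bandwidth, so the two figures are not expected to match:

```python
# Front-panel capacity implied by the stated port counts (one direction,
# full duplex counted once). These are back-of-the-envelope numbers, not
# a vendor datasheet calculation.
ports_40g, ports_100g = 384, 192

print(ports_40g * 40 / 1000, "Tbps across 384 ports of 40 GbE")
print(ports_100g * 100 / 1000, "Tbps across 192 ports of 100 GbE")
```

That works out to 15.36 Tbps and 19.2 Tbps of front-panel capacity respectively, well under the quoted 83 Tbps of fabric capacity, which leaves headroom for denser line cards down the road.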
Holy cow!! The new Nexus 7700 looks awesome & the throughput made my jaw drop! In addition F3 line cards! Literally blowing my mind #clus
— nateb72 (@nateb72) June 24, 2013
Cisco Nexus 7700. 42tb of switching capacity? Holy hell that’s a lot of capacity. 1.3tb/slot #clus
— Ian Underwood (@iunderwood) June 24, 2013
When details of this monster switch leaked out over Twitter, I asked the Cisco PR team about it. They were tight-lipped. The official announcement is scheduled for a Wednesday press conference, and Cisco isn’t prepared to offer details to the general public yet. Some content about the switch appeared on Cisco’s website recently, but it was scrubbed after some network engineers stumbled upon it.
Given that Insieme Networks executives are scheduled to participate in the Wednesday press conference, I expect the news will be much bigger than just the Nexus 7700. My sources have told me that Insieme has been developing a massive fabric controller that can orchestrate the entire data center, not just the network. Rumors have persisted that the stealth Cisco spin-in is deeply involved in software-defined networking, too. But I’ve never been able to confirm that rumor.
I’ve heard Cisco executives boasting that this year’s Cisco Live will feature one of the biggest collections of major new product announcements in years. The Catalyst 6800 was a good start and the Nexus 7700 was impressive, but I think there’s more to come.