I am asked on a fairly regular basis whether I’m aware of a top-five WAN service provider list for global and UK providers. http://telecoms.wiki has created a PDF list which includes two global providers and three national providers for both MPLS and VPLS. Request your copy here for global or here for UK without being added to marketing lists; all data is kept completely confidential.
The PDF lists the providers telecoms.wiki has worked with since 2009, which should give your project a good start. They have experience of each provider and will continue to add to the list as more are recommended based on real-world experience.
Additionally, access our Techtarget WAN providers Mindmap, where we distil our experience over 15 years, including live ongoing opportunities. We outline the wins, risks and failures, with updates every single month – a living document whose objective is to explain what the top-performing companies are considering and the areas which are typically problematic. In our experience, only a small percentage of organisations procure WAN services successfully – I would estimate maybe 10% as an approximate guide. Our insights came from a series of projects which we turned into a regular research activity: we explored enterprise architecture from 2001 to 2009 across around eighteen firms. Today our research is updated monthly using live BT projects. In the interest of full disclosure, I now run a BT Authorised Business, but prior to this position I assisted global and UK organisations with WAN procurement, hence why we are often asked for a WAN service providers list and associated supporting information. The guide is written for all executives who wonder why some firms manage to do well in the field of WAN procurement whilst others suffer major issues and problems.
Now, if you are reading this article, I trust you have an affinity for a fit-for-purpose design and proposal. And when I say design and proposal, I mean more than just pricing. With this said, if you are unsure before you move forward with requesting proposals, read on to understand a few key areas to consider which we hope will support your PDF list of WAN service providers for the UK and global markets.
When considering the details, there are a number of key questions we should ask because there are so many areas to be considered but, unfortunately, no silver bullet. Essentially, there is no one WAN provider to fit all requirements, which is why your organisation is uniquely aligned to certain providers – you just have to find out which one/s. (Easy, right?)
Let’s be clear: a list is a good thing. A top guide to anything is a starting point for considering which global or UK MPLS or VPLS service providers you might engage. An IT manager armed with a list will typically follow one of three paths at this early stage.
1. Create a basic requirements list and send out an MPLS or VPLS RFP or spreadsheet. This is probably the most common approach and typically consists of a template listing some basic bandwidths in the form of an Excel spreadsheet. Perhaps the biggest risk to organisations is the commodity-based decision-making process this creates. The replies received from your WAN service providers list will no doubt consist of copy-and-paste content which is not specifically related to your needs. A sales process started in this way will normally continue without much focus on meeting your specific goals. In the absence of value, price is all you have left, which results in some of the typical issues and problems we see in the marketplace.
2. Use a guide (i.e. our Techtarget guide). We created the step-by-step service provider procurement guide (mentioned earlier) based on years of experience working for and with large telcos. Our A2 Mindmap is updated on the last Friday of every month and represents the sum of our knowledge as it grows with real-world experience. The Mindmap is a companion to our book, which was written to help IT managers achieve excellence within the MPLS, VPLS and VLL procurement fields.
Using a repeatable process such as our Mindmap is an ideal way to set your organisation on the right path to procurement success. The objective is to outline the successes and failures of each and every project and distil them down into an easy-to-consume single A2 document.
3. Use an outside consultant. As with any professional services, it is important to understand whether the consultancy is aligned with your objectives. We know of several organisations here in the UK which operate with dramatically different workflows. One is very focused on price above all else – they request a percentage of the savings, which ensures high motivation towards price reduction. Others are more focused on the value proposition and on helping you to understand strengths and weaknesses. We have yet to find a consultancy which really attempts to align business strategy with the capability of providers; there are elements of this approach, but nothing with a genuine focus on it. Clearly, using a consultancy also has an additional commercial impact.
In my opinion and experience, these are typically the main three options.
What is the impact of a poor decision-making process?
In some cases, mild frustration. In others, major business disruption and loss of revenue. One of the major risks with any sales process is the ‘features and benefits’ approach. We were all taught to sell on features and benefits, and clearly weighing a feature against its benefits helps us understand the impact of a purchase. However, this sales approach doesn’t typically do a great job of understanding your specific business.
Good WAN services proposals are created using knowledge and experience. Some telecoms products and services are relatively straightforward, but in the case of your WAN, careful thought must be given to avoid a poor outcome. There are multiple areas we cover within our Mindmap guide, from users through to cloud-based computing and business strategy. The WAN has always been a key aspect of any business, but today the reliance is becoming ever more critical, with global working, BYOD (bring your own device), regulations, customer service and extranet access all impacted by the service provider decision-making process.
Fundamentally, the WAN is about people. Whether we are talking about end users, extranet clients, customers or partners, the result is the same: the user experience is key. The way in which users interact with the WAN may change on an ongoing basis as a result of business strategy. For example, the acquisition of another company creates a major impact on your connectivity from the perspective of added load, new applications, new workflows and, of course, new locations. If you believe your organisation may be looking to acquire a new business, you need to be sure any potential provider has good coverage for the locations of interest. Other common strategy elements include business expansion or downsizing.
So, the WAN architecture is generally dictated by specific user needs. Whether you opt for a centralised or a decentralised environment is largely dependent on users, who will dictate application flow. In some cases, local applications are more beneficial than reaching applications via the cloud. This is becoming a rare occurrence, but the point remains: the way users interact with the WAN will dictate the architecture.
Avoiding a silo approach
We find many procurement projects are unsuccessful because business issues and drivers are effectively filtered out by silos. The ‘silo’ is the typical structure within IT departments today, where the combined effort of IT teams works fine but projects, issues and pain points all exist individually and are not particularly well defined. In most SMB organisations, the silo is less of a problem since the overall business is not particularly complex. As a business grows to enterprise size, the impact is significant, with many IT silos which are not particularly well connected. The key to successful WAN procurement is to present the technology as an enabler to the business, which requires aligning all key areas from applications through to business strategy and much more. Without this kind of process, the WAN is often a bottleneck, with even routine activities impacted by a misaligned provider.
Let’s look at a few examples.
Apps are essentially the very reason WAN services exist. We advise creating thorough documentation of your applications using a hierarchical definition.
High priority: Voice, video
Medium priority: Mission critical apps such as Citrix
Low priority: Internet, email
Understanding your applications includes considering both how critical they are to the business and their specific attributes and usage. Allowing some time to really focus on the flow between user and host, together with the existing capability of applications to serve the business, is critically important. When looking at the existing situation, review previous outages, issues and performance to gain insight into how the capability might be improved. We worked with a client recently whose key applications were considered ‘chatty’. This essentially means each packet requires an acknowledgement that delivery has completed without packet loss or other issues. In theory, this sounds sensible, but consider that a network with high latency creates slow performance because each acknowledgement must travel across the network. The way around this is application performance enhancement, which allows packets to be locally acknowledged. Using Visio to show per-application flow, performance (latency and jitter), downtime and required service levels is a sensible approach.
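To make the ‘chatty application’ point concrete, here is a minimal back-of-the-envelope sketch. The round-trip counts and RTT figures are illustrative assumptions, not measurements: when every request must be acknowledged before the next is sent, total time is dominated by round trips rather than bandwidth.

```python
# Rough model of a "chatty" application: each request waits for an
# acknowledgement before the next is sent, so total time scales with
# round trips, not link speed. All figures below are illustrative.

def chatty_transfer_time(round_trips: int, rtt_ms: float) -> float:
    """Seconds spent purely waiting on acknowledgements."""
    return round_trips * rtt_ms / 1000.0

# e.g. an operation needing 2,000 request/response exchanges:
for rtt_ms in (2, 20, 200):  # LAN, national WAN, intercontinental WAN
    t = chatty_transfer_time(2000, rtt_ms)
    print(f"RTT {rtt_ms:>3} ms -> {t:.1f} s of round-trip wait")
```

The same 2,000 exchanges that are invisible on a LAN become minutes of waiting on a high-latency path, which is exactly why local acknowledgement (WAN optimisation) helps chatty protocols.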
Diversity and uptime
Leading on from applications is how we maintain network uptime. In general, when designing networks for resiliency, there is a balance between budget and desired design. Consider how downtime for your key applications would impact the business versus the outlay of a fully diverse design. The network architecture for diversity is also an area where many procurement projects encounter issues. Another recent example involved an organisation which was advised to use two tail circuit providers for primary and failover on the basis that this would provide the best possible uptime. However, this is rarely the case, because each tail circuit provider has no access to its counterpart’s circuit planning. In other words, there may well be multiple shared points of failure along the path through the network. In the UK, wholesale providers such as Openreach offer specific products designed to avoid any single point of failure.
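The diversity point can be sanity-checked with simple availability arithmetic. A sketch with illustrative availability figures (not real SLA data): two truly independent circuits multiply their failure probabilities, but any shared segment – the hidden single point of failure in the example above – caps the whole design.

```python
def parallel_availability(a1: float, a2: float) -> float:
    """Availability of two fully independent circuits in parallel:
    the service is down only when both circuits are down."""
    return 1 - (1 - a1) * (1 - a2)

primary, backup = 0.995, 0.995   # ~43.8 hours of downtime per year each
shared_segment = 0.999           # a duct or exchange both tails depend on

independent = parallel_availability(primary, backup)
with_spof = independent * shared_segment  # shared element limits the gain

print(f"fully diverse pair:      {independent:.6f}")
print(f"with shared segment:     {with_spof:.6f}")
```

Two independent 99.5% circuits yield 99.9975% in theory, but one shared 99.9% segment drags the whole service back below 99.9% – which is why diverse designs must be verified against the providers’ actual circuit planning.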
Applications and QoS
The purpose of QoS (Quality of Service) is to prioritise applications, with the benefit of consistent and predictable performance. However, what if the distance between two of your locations is simply too far from a latency perspective, with a detrimental impact on your application regardless of QoS? In this respect, it is important to understand not only QoS and how it might benefit your applications but also the other factors which surround this feature set.
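The latency floor mentioned here comes straight from physics and can be estimated with a one-liner. A sketch, assuming light travels at roughly 200 km per millisecond in fibre; the route lengths are illustrative, and real cable paths are longer than the straight-line distance.

```python
# Light in fibre covers roughly 200 km per millisecond (about
# two-thirds of the speed of light in a vacuum). QoS can manage
# queuing delay, but it cannot reduce this propagation floor.

SPEED_IN_FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(route_km: float) -> float:
    """Best-case round-trip time over a fibre path of the given length."""
    return 2 * route_km / SPEED_IN_FIBRE_KM_PER_MS

# Illustrative route lengths (actual cable paths vary considerably):
for name, km in [("London-Frankfurt", 800),
                 ("London-New York", 6000),
                 ("London-Singapore", 12000)]:
    print(f"{name:<18} >= {min_rtt_ms(km):5.1f} ms RTT")
```

If an application cannot tolerate a 120 ms round trip, no amount of traffic prioritisation between London and Singapore will save it – the architecture (local servers, acceleration, regional hubs) has to change instead.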
I hope these few examples have highlighted the importance of aligning your requirements with the capability of your WAN services provider. The only sensible method of achieving a good outcome is a well-thought-out approach which your organisation must dictate when dealing with the service provider, or the provider’s sales team must demonstrate their capability to diagnose your requirements. If you experience selling based simply on yet another product feature or benefit, your decision-making process will more than likely result in a commodity-based selection.
As a reminder, the Network Union Mindmap offered within this blog post will help you analyse your current situation and then look at the portfolio of products and services offered by telcos, VNOs or resellers, allowing your organisation to identify the best way forward. The early development of our material began in 2001 as clients were moving away from Frame Relay and ATM services. As the market expanded usage of WAN services to include remote home working, extranet clients and applications such as voice and video, the WAN became more critical, and procurement therefore became more important to ensure businesses were well supported.
We have worked hard to embed our diagnostic approach into a single ‘at a glance’ Mindmap and would appreciate your feedback if you decide to request a copy.
By Chuck Moozakis
Private equity investment firm M/C Partners has a long pedigree in telecommunications investing. The Boston-based firm, formerly known as Media Communications Partners, has overseen more than $1.5 billion in placements over the past two decades as it focused on companies spawned from the landmark Telecommunications Act of 1996.
So, when Managing Partner Gillis Cashman talks about the firm’s latest investment – $50 million in equity funding for data center services company Involta – his thoughts bear at least a cursory listen. Why, after previously investing in heavyweight companies that included MetroPCS and Level 3 Communications, does M/C now believe Involta, with just a handful of data centers in towns such as Duluth, Minn., and Marion, Iowa, is a good bet?
The simple answer? Application performance. Or more specifically, the lack thereof.
“There is a view in cloud computing that data centers are now becoming commodities and that proximity doesn’t matter; you can host your servers anywhere,” Cashman said. But concerns about application performance, and to a lesser extent security, are inhibiting cloud’s success, he said. “When you think about application performance, it really requires a different architecture, where you need to get those servers and applications very close to the end user.
“Instead of 50 servers being in a data center in the middle of nowhere, now what you need is 50 servers at 50 data centers close to the edge where the redundancy is in the network itself.”
And, Cashman said, those DCs should be located where the need is greatest: to serve enterprises in communities that are not served by Tier 1 or Tier 2 providers. These companies, he said, still have mission-critical applications, but they can’t get the service-level agreements they need to ensure their employees and customers are getting the application performance they deserve.
“There is far more insourcing going on in smaller markets,” Cashman said. “The reason is they either don’t trust the facilities in the market or there are no facilities in the market, so they are forced to deploy their applications internally.” To target these types of customers, Involta builds a dedicated fiber link from the DC to the enterprise, effectively creating a leased line. “Performance across this network is guaranteed because it never touches the public Internet, and to me, that is a critical factor that will drive more outsourcing [to data centers]. You need to have that infrastructure in place to effectively place these private cloud architectures.”
Ensuring that Acme Manufacturing in central Iowa has the same broadband capability and application performance as XYZ MegaCorp. in New York City is smart business – and as M/C Partners almost surely agrees, it’s good business, too.
The default for the enterprise is typically to progress its MPLS proposal with the larger end of the market, which is understandable: an enterprise requires a service provider of equal stature in terms of size to provide comfort in stability. On the flip side, smaller organisations (think SME) often avoid the larger service providers in favour of the agility and focus which smaller providers typically offer.
I personally worked for a large service provider in the mid-2000s and recall a strategy change where the CEO decided to effectively segment the business. In short, the provider decided they were expending far too much employee time supporting SME businesses which represented a fraction of their revenue. As a business decision, it was probably the right one to make, but I imagine the SMEs given the news that they were effectively being forced into a different support channel were not impressed. The same provider also launched a new programme of professional services under which large enterprises would be expected to pay for service and project management – i.e. these resources were no longer provided by default. I’m not judging their decision, and in many ways service and support increased for their enterprise clients, which probably had the budget.
The smaller SME should therefore be wary of entering into contracts with the larger providers, since they may not receive the focus and service enjoyed by the larger-paying clients. I appreciate this is a broad statement to make, and larger service providers are making strides in changing how they support the SME market. As an example, BT have launched a specific product dedicated to the SME market, but the release is in its early days, so we will have to see how things pan out.
Let’s look at some of the comparisons.
Clearly, larger service providers have huge revenue streams, which offers the stability associated with institutions of similar size. That said, profitability is still very important – we have witnessed large providers such as WorldCom enter Chapter 11, so size alone does not guarantee stability. However, all things being equal, a large, stable company provides long-term comfort when signing WAN contracts. The smaller providers are often good, profitable organisations, but they have a much shorter way to fall if things go wrong. We know of companies which rely on a few contracts for the bulk of their income and profitability, which is clearly a risk; others have a good broad range of contracts and so are more stable and further along their business growth path. It is also true that smaller providers are more prone to strategy changes: in any given month, they may decide to invest in a way that changes their financial position and increases risk.
Staffing and coverage are also areas which require clarification. To use another example, a provider we worked with under a consultancy arrangement had only two main POPs (points of presence) in the UK and only a few staff. We asked how they would support offices over large distances and they said, “We would put replacement hardware in a van and ask one of the engineers to drive it over.” Whilst this approach may work, it’s clearly not a particularly robust support process.
Coverage is highly variable among smaller providers. Our experience ranges from companies with hardware in an office (yes, really) through to a couple of core POPs and up to well-engineered networks. I always recommend that IT management looking at procurement clearly understand the true MPLS coverage of service providers.
Over and above coverage, the process for adds, moves and changes varies greatly between the larger organisations and the smaller companies in the marketplace. In my experience, smaller represents agility, with larger service providers often creating more bureaucracy.
Over the years there has been one constant split between UK-based organisations and their US counterparts: the US appear to procure their MPLS capability as wires-only, i.e. self-managed, versus the UK’s tendency to outsource.
We are witnessing companies such as BT gain more traction with self-managed WAN products such as IP Connect unmanaged, which allows clients to procure network connectivity without management. Note: for UK readers, IP Connect replaces the IP Clear product.
As a rule, service providers are traditionally a little cumbersome to deal with when making adds, moves and changes. In many ways, the lack of agility when making changes is perhaps one of the main reasons why there is so much churn in the industry. In our past life, we assisted organisations with WAN procurement, which provided insight into why businesses were looking to change service provider. One of the main reasons? Making changes to the network took far too long, the change was often incorrect and the documentation reflecting the updated network was poor.
With the above in mind, I would have expected to see less churn in the US market simply because businesses are in a position to make their own changes to the network. I believe this tells us that frustration with service provider agility is only one reason why an organisation procures a brand new WAN; the decision reflects the sum of a number of issues and problems.
When outsourcing to a service provider, you are effectively reliant on the provider to configure and maintain the capability of your edge routers (and potentially switches). There is an obvious benefit here, since outsourcing allows your IT team to focus on other areas of the business rather than the WAN. However, the negative occurs when the provider simply does not act quickly enough and/or misunderstands requirements, resulting in incorrect delivery of the change. In my experience of working with and for large service providers, we’ve seen the simplest of changes take weeks, which has a profoundly negative effect on the business. When these kinds of delays occur, the WAN becomes a bottleneck rather than an enabler. In 2014, you would also expect to see innovations in the field of change requests, and for sure we are seeing some improvements. However, the majority of providers are still very much reliant on the same bureaucratic processes. That said, we have seen some real innovation, with portals now providing an easily accessible method of requesting changes, with real-time updates as progress is made. These portals generally only support a ‘base’ level of changes, though, and the more complex or un-productised changes still very much require a manual process.
Clients which adopt an in-house managed approach are well positioned to make changes as and when required. However, any changes which require the provider to also alter the configuration on their network may still result in delays similar to those experienced with outsourced capability. In general, though, straightforward changes are completed quickly as and when required, which provides an agility the service providers generally cannot match. The negatives revolve around ensuring your IT staff are well positioned to troubleshoot, configure and maintain the due diligence required of a corporate network. As MPLS is a provider network protocol (and not deployed on your edge router), the actual configuration required is relatively simple. Self-managed networks are more often than not still fully monitored by the service provider, which is key to leveraging their knowledge and insight into the end-to-end connectivity.
The last point to make is that outsourcing does not have to be provided by the service provider. There are many opportunities to leverage niche managed IT companies as an alternative. As cloud services become more prevalent, IT companies are leveraging their capability to provide a ‘one-stop shop’ for managed services. I believe we will see this occur more as we move forward.
Riverbed Technology conducted a short survey of Interop attendees at its booth during the Las Vegas conference in April, but the findings didn’t stay in Vegas. The survey of IT professionals revealed that IT knows that poor performance by mission-critical applications can hurt their businesses, but too few of them are doing anything about it, according to Riverbed.
Riverbed asked 210 Interop attendees about their awareness of the technologies that exist to mitigate app performance problems, and whether their company was taking proactive steps to avoid application issues, said Steve Riley, technical director for the CTO’s office at Riverbed. The top three causes of performance problems, according to Riverbed’s survey, are insufficient bandwidth, too much latency and slow servers. But a gap remains between those who are aware of the problem and those who are actually implementing solutions, Riley said. In fact, the results found that 80% of respondents know that slow business-critical applications can have a moderate to extreme impact on overall business performance, but only 50% of them were actually doing something to solve application performance issues.
While budget constraints are undeniable, Riverbed believes the overall willingness to adopt application delivery and performance management technology that can help is “missing to a certain degree,” Riley said.
“Seventy percent of respondents said, ‘Let’s throw more bandwidth at the problem and see if it goes away,’ but only 50% of businesses actually have bought more bandwidth,” he said. And 67% of respondents believe that WAN optimization solutions are a good way to combat application performance issues, but only 42% of businesses were using WAN optimization tools. Finally, 52% of respondents believe that they can solve some performance problems by geographically distributing application workloads, but just 28% of businesses have followed through with that method.
So what can businesses with budget constraints do to mitigate application performance problems? First, Riverbed suggests that businesses evaluate their mission-critical apps before buying a bunch of bandwidth. Some applications simply won’t begin performing better with more bandwidth, because latency and jitter could still be an issue, Riley said.
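Riley’s point that bandwidth cannot fix a latency-bound application can be illustrated with the classic TCP window limit. A sketch with illustrative figures: a single TCP flow can never exceed its window size divided by the round-trip time, however fat the pipe.

```python
# A single TCP flow's throughput is capped by window_size / RTT,
# regardless of link bandwidth, so adding bandwidth cannot help a
# latency-bound transfer. Figures below are illustrative.

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP flow's throughput in Mbit/s."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

window = 64 * 1024  # the classic 64 KB window without window scaling
for rtt in (10, 80, 200):
    cap = tcp_throughput_mbps(window, rtt)
    print(f"RTT {rtt:>3} ms -> at most {cap:.1f} Mbit/s per flow")
```

With a 64 KB window and a 200 ms round trip, one flow tops out near 2.6 Mbit/s even on a 10 Gbit/s link, which is why diagnosing the application first (window scaling, parallel flows, WAN optimisation) beats buying more bandwidth.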
“Don’t just guess at what your performance problems are,” he said. “We really suggest that people analyze and diagnose their application problems first.”
Businesses should also think about geographic distribution, Riley said. “Keep in mind that it does take time for applications located in one place to move to another place. With major cloud providers scattered all over the world, multiple instances of workloads is becoming an easier thing for people to do, so [users] can access that data anywhere.”
Last, but certainly not least in the opinion of a vendor focused on IT performance, businesses should also put WAN optimization tools where they make sense. “The whole purpose of that technology is mitigating latency, and allowing end users to access applications as if they are local, even if they are on the other side of the world,” Riley said.
What’s the term for when you’re just throwing things out there to see what works? Spitballing? I’m just spitballing here. I think Juniper Networks should start selling Junos, the operating system for its switches and routers, as a software product.
I believe in branding. If a company has a brand that users like, use that brand. Everyone loves M&Ms and lots of people love Skittles, but you never see an advertisement for their parent company, Mars Inc. Remember when BlackBerrys were a big deal? Research In Motion (RIM) rarely, if ever, tried to market new products under the RIM brand. Heck, the company changed its name to BlackBerry eventually.
Juniper has Junos. It is a great brand. It’s a great piece of technology. Every Juniper customer whom I’ve talked to loves Junos. Sometimes they don’t completely love the hardware Junos runs on, but they always love Junos. To paraphrase one example from a customer: “The Virtual Chassis technology on this particular model of Juniper EX switch is kind of a pain in the neck to work with, but darn it, I love Junos!”
Juniper knows it has a good thing going with this brand. Take a look at its relatively new network management software brand, Junos Space. When Cisco re-branded its network management software, it chose Cisco Prime. It didn’t bother re-purposing NX-OS, the operating system for its Nexus data center switches, as a management software brand. NX-OS Prime? Too many Cisco customers still grouse about the instability they dealt with in early NX-OS code releases back in 2009 and 2010. But customers love Junos, so Juniper extended the brand.
I know what you’re thinking. This is the switch and router industry. These companies don’t sell software. They sell boxes. Big vertically integrated systems with high profit margins that (sometimes) keep shareholders happy. But things change. Cisco built the Nexus 1000v, which looks and feels like a Nexus switch, but serves as a distributed virtual switch on a hypervisor host. Everyone is at least testing a product with Open Virtual Switch (OVS) software in it. Cumulus Networks, Big Switch Networks and Pica8 are all building business around switch software that can run on white box or bare-metal switch hardware.
Why not throw Junos into the mix? How many Juniper customers would like the chance to run a Junos router on an x86 server, or to regain control over the virtualized access layer of their data center by running Junos on hypervisors? How many cloud providers or Web content providers would try out Junos as an OS for bare-metal switches? I have no idea, but I find the concept interesting.
It will probably never happen, because… shareholders. Cisco would never do this and neither can Juniper. Wall Street wants profit margins. Is it a complete coincidence that after Dell went private, it started selling bare-metal versions of its data center switches with support for Cumulus and Big Switch operating systems?
I’m not so naïve as to think Juniper could just rip Junos out of its system stack and release it as a product. It would probably take a lot of time and money to do such a thing. I might even be dead wrong about releasing Junos as a software product. Maybe people don’t want it in that form. I just think it’s an interesting idea. Last year Juniper memorably announced an enterprise software licensing program, Juniper Software Advantage, but offered absolutely no software products within the licensing regime at the time of the announcement. It was a perplexing move, but months later it moved its security software products into the program. I think people would have really been excited to see Junos available through that licensing scheme. Like I said, I’m just spitballing.
As the large group of IT administrators and engineers filed into the IPv6 session at this year’s Interop Las Vegas, they probably were expecting to hear yet another boring lecture about the importance of upgrading from IPv4.
After all, it’s been 15 years since IPv6 was ratified. And for proponents of the specification, it must seem it will be another 15 before it gets adopted in the majority of enterprise and service provider networks.
These Interop attendees already knew the benefits of IPv6: it is a more efficient transmission method; it supports more robust security standards. And they are well aware that IPv4 is running out of addresses. IPv6, with its 128-bit addressing scheme, is the only logical solution, backers say.
But those reasons alone hadn’t motivated most of the engineers sitting in the Mandalay Bay conference room in Las Vegas to make the move. Despite the benefits of IPv6, making the necessary infrastructure changes to adopt the new standard takes work, time and money. And for most organizations, those three ingredients are always in short supply.
So it was up to speaker Edward Horley to persuade these professionals why it was time to move to IPv6. And he had a very convincing argument: You already have IPv6 in your networks, he said, and if you haven’t properly planned for it, prepare for trouble.
And how did IPv6 sneak into the networks overseen by these engineers, the vast majority of whom said they weren’t running the new protocol? Through simple upgrades, Horley said. Upgrades or deployments of such common operating systems as Microsoft Windows Vista, 7, 8 or Server 2008, for example, are all grounded in IPv6, as are Server 2008 R2, 2012 and 2012 R2.
“You’ve already deployed it. It’s already there, and you better know what it is doing in your network,” said Horley, principal solutions architect at Campbell, Calif.-based Groupware Technology and a long-time IPv6 evangelist. “This is one of the things people don’t understand. It’s on by default. This is your domain of responsibility, and it’s your job to understand it.”
What all of this means, Horley said, is that network administrators better learn, and learn quickly, how IPv6 will affect their operations. “I am here to tell you that you did deploy IPv6, you didn’t do it in an educated way and you need to understand the impact,” he said. “It’s alarming: The vendors did this and they didn’t tell you. Well, it’s easier to adopt and support IPv6 than to run away or ignore it.”
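One visible symptom of that silent deployment: modern hosts auto-configure link-local IPv6 addresses (the fe80::/10 range) on every interface, whether or not anyone planned for them. A minimal sketch using Python’s standard-library ipaddress module shows how to classify addresses you might spot in your own logs (the sample values here are illustrative, not from any real network):

```python
import ipaddress

# Example addresses like those auto-configured on hosts or seen in logs
# (hypothetical sample values; 2001:db8::/32 is the documentation prefix).
samples = [
    "fe80::1c2a:3ff:fe4b:5c6d",  # link-local: present on most modern hosts
    "2001:db8::10",              # global-scope address
    "::1",                       # loopback
]

for addr_text in samples:
    addr = ipaddress.ip_address(addr_text)
    scope = ("link-local" if addr.is_link_local
             else "loopback" if addr.is_loopback
             else "global/other")
    print(f"{addr_text:28} -> {scope}")
```

Finding fe80:: addresses on hosts that “don’t run IPv6” is exactly the on-by-default behavior Horley is warning about.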
Horley didn’t sugarcoat the challenges administrators face. The transition, he said, “will be ugly for everyone because we’ve been kicking the can for 10 years.” As a result, carriers have resorted to tactics such as carrier-grade network address translation (CGN) to mitigate the exhaustion of IPv4 addresses. But there are serious shortcomings to that technique, especially for websites such as Google Maps that require hundreds of sessions to complete. And dual-stack, a tool that permits the support of both protocols, doesn’t offer a long-term answer either, Horley said.
“Six solves these problems,” he said.
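Dual-stack also surfaces in application code in ways that surprise administrators: on a dual-stack listener, IPv4 clients commonly show up as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d). A small sketch with Python’s standard-library ipaddress module shows how to recover the underlying IPv4 address (the peer value is a made-up example from the 192.0.2.0/24 documentation range):

```python
import ipaddress

# On a dual-stack socket, an IPv4 client typically appears as an
# IPv4-mapped IPv6 address.  Recover the underlying IPv4 address:
peer = ipaddress.IPv6Address("::ffff:192.0.2.1")

if peer.ipv4_mapped is not None:
    print(f"{peer} is really IPv4 peer {peer.ipv4_mapped}")
else:
    print(f"{peer} is a native IPv6 peer")
```

A native IPv6 peer such as 2001:db8::1 would take the else branch, since its ipv4_mapped attribute is None.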
But wait. As the infomercial announcers like to say, there’s more.
Further delay in migrating to IPv6 will also begin to seriously impact such elemental services as VPNs, VoIP and Session Initiation Protocol-based operations, Horley said.
Bottom line? “You don’t have to start tomorrow, but you do have to start thinking about it,” Horley said, adding yet another reason why administrators can’t wait: the end of Windows XP support. As that popular OS gets phased out at enterprises, three guesses at what’s waiting in the wings.
If it makes you feel any better, organizations spent more than $12 billion on firewall, intrusion prevention, endpoint protection and secure Web gateway products last year. And that’s just a fraction of the tens of billions of dollars enterprises spent overall in the past 12 months to protect their digital assets.
Alas, it’s not nearly enough, as recent data breaches at Target and Neiman Marcus have illustrated.
And the best (that is, worst) is yet to come.
“I really think we are looking at some new aspects” in malware and enterprise vulnerabilities, said Gartner Research Director Eric Ahlm at a McAfee data protection webinar held in mid-January. “There is a change in the threat landscape.”
Among the changes: User-based attacks are becoming easier and targeted attacks have become much more intelligent.
“Being able to prevent is much more of a challenge,” Ahlm said.
At the same time, hackers have a well-oiled ecosystem, whether they are organized state agents or solitary data thieves who can easily tap into a willing market in which to sell their stolen information.
But wait. There’s more: The continued growth of mobile devices is bringing with it some especially sobering security trends, according to Gartner, including the following:
–By 2018, 25% of corporate data (compared with 4% today) will bypass perimeter security and flow directly from mobile devices to the cloud.
–Through 2017, 75% of mobile security breaches will be a result of mobile application misconfigurations.
“If we’ve lost our control plane and lost our visibility plane, it’s going to make [asset protection] much more challenging,” Ahlm said.
That said, not all is gloom and doom. Adaptive, rather than preventive, security will become an important weapon in enterprise security arsenals.
“We need to be able to find compromised systems and know what methods we have to find these systems,” Ahlm said, adding that a security strategy anchored by situational and contextual awareness platforms will be critical.
“Security teams need to hunt and they need to look. Knowing what’s involved and what’s in play will be vital in building programs that succeed.”
–Use network analysis in conjunction with global threat intelligence feeds to determine if a system is under a hacker’s control.
–Correlate internal information such as network logs, network behaviors, host behaviors and user importance. That situational awareness can help organizations prioritize and triage in the wake of a data breach, Ahlm said.
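Ahlm’s two recommendations boil down to a join between internal telemetry and external intelligence. A minimal, hedged sketch of the idea, using entirely hypothetical hostnames, IPs (documentation ranges) and a made-up user-importance tier:

```python
# Hypothetical threat-intel feed: known command-and-control IPs.
threat_feed = {"203.0.113.9", "198.51.100.77"}

# Hypothetical internal connection log; lower user_tier = more important user.
connection_log = [
    {"host": "hr-laptop-12", "dest_ip": "203.0.113.9",   "user_tier": 2},
    {"host": "cfo-laptop",   "dest_ip": "198.51.100.77", "user_tier": 1},
    {"host": "dev-vm-3",     "dest_ip": "192.0.2.50",    "user_tier": 3},
]

# Flag hosts talking to known-bad IPs, then triage the most
# important users first -- the prioritization Ahlm describes.
hits = [e for e in connection_log if e["dest_ip"] in threat_feed]
hits.sort(key=lambda e: e["user_tier"])

for e in hits:
    print(f"ALERT tier-{e['user_tier']}: {e['host']} -> {e['dest_ip']}")
```

Real situational-awareness platforms do this at scale with streaming data, but the core correlate-and-prioritize step is the same.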
Say bye to information technology. Say hello to enterprise technology.
So says Nemertes Research President Johna Till Johnson. In a discussion highlighting Nemertes’ 2013-2014 Enterprise Technology Benchmark study earlier this summer, Johnson said the shift from IT to ET will be no less dramatic than the transition from MIS to IT 30 years ago. It’s a swing, she said, that will have a big impact on IT professionals.
“In a nutshell, what we are seeing is that IT is now being asked to be a trusted adviser to drive the business,” she said. “IT practitioners are now being asked to move into an enterprise technology role,” supporting and guiding the entire business.
Fueling the shift: the rise of the remote worker, untethered from the office and free from the physical network. Employees, Johnson said, “are not at their desks; they are out serving customers, taking orders.” The result: Instead of networking knowledge workers, administrators today must network the broader enterprise.
Fortunately for IT executives, COOs and CEOs appear to be actively soliciting their advice on how ET can be made a reality. Nemertes’ research found that 73% of CIOs responding to its survey have been asked to participate in an ET transformation project. Only 13% of CIOs gave the same answer in Nemertes’ 2012 survey. That’s approximately the same result Cisco found in its 2013 Global IT Impact Survey, which noted that nine out of 10 IT execs collaborate with corporate brass at least on a monthly basis to coordinate strategic initiatives.
That’s the good news. The challenge: Becoming ET-savvy won’t come without a hitch. Where IT is all about getting information transmitted from point “A” to point “B,” ET is about understanding how that conveyance helps the organization innovate its operations. Or, as Johnson described it, “IT is about getting the trains to run on time; innovation is about disrupting the existing process, so we are seeing the concrete impact of the innovator’s dilemma,” where companies risk their own survival by failing to adopt technologies or strategies that will meet their customers’ future needs. Orchestrating that shift successfully “will have a huge impact for us who work in the tech field,” she said.
So get ready for ET. It will be here sooner than you might think.
On Monday Cisco’s carefully curated press program at Cisco Live Orlando focused on the refresh of the company’s campus switching portfolio, most notably with the new Catalyst 6800 series of chassis switches, whose supervisor modules and line cards are backward and forward compatible with the Catalyst 6500.
Meanwhile, Cisco Live attendees got a special treat during a keynote presentation by David Yen, vice president and general manager of Cisco’s data center technology group. Yen shared the stage with a very tall member of the new Nexus 7700 series of data center core switches. This new family of switches has up to 83 Tbps of total switch capacity, with 1.3 Tbps of throughput on each slot. A fully populated top-of-the-line Nexus 7700 can support 384×40 Gigabit Ethernet (GbE) or 192×100 GbE ports. The switch had attendees swooning.
Holy cow!! The new Nexus 7700 looks awesome & the throughput made my jaw drop! In addition F3 line cards! Literally blowing my mind #clus
— nateb72 (@nateb72) June 24, 2013
Cisco Nexus 7700. 42tb of switching capacity? Holy hell that’s a lot of capacity. 1.3tb/slot #clus
— Ian Underwood (@iunderwood) June 24, 2013
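The swooning checks out on the back of an envelope. A quick Python sketch totals the aggregate port bandwidth of the two quoted configurations against the quoted 83 Tbps switch capacity (all figures taken from the announcement above):

```python
# Aggregate front-panel bandwidth of the two quoted port configurations,
# in Gbps.
cfg_40g  = 384 * 40    # 384 x 40 GbE ports
cfg_100g = 192 * 100   # 192 x 100 GbE ports

print(f"384 x 40 GbE  = {cfg_40g / 1000:.2f} Tbps")
print(f"192 x 100 GbE = {cfg_100g / 1000:.2f} Tbps")
print(f"Quoted total switch capacity: 83 Tbps")
```

At 15.36 Tbps and 19.2 Tbps respectively, either fully loaded configuration still sits comfortably inside the quoted 83 Tbps fabric capacity.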
When details of this monster switch leaked out over Twitter, I asked the Cisco PR team about it. They were tight-lipped. The official announcement of this switch is scheduled to take place during a Wednesday press conference, and Cisco isn’t prepared to offer details to the general public yet. Some content about the switch appeared on Cisco’s web site recently, but it was scrubbed when some network engineers stumbled upon it.
Given that Insieme Networks executives are scheduled to participate in the Wednesday press conference, I expect the news will be much bigger than just the Nexus 7700. My sources have told me that Insieme has been developing a massive fabric controller that can orchestrate the entire data center, not just the network. Rumors have persisted that the stealth Cisco spin-in is deeply involved in software-defined networking, too. But I’ve never been able to confirm that rumor.
I’ve heard Cisco executives boasting that this year’s Cisco Live will feature one of the biggest collections of major new product announcements in years. The Catalyst 6800 was a good start and the Nexus 7700 was impressive, but I think there’s more to come.