The Network Hub


August 11, 2014  6:01 PM

Venture capital firm back in data center market: Why?

Profile: Chuck Moozakis
APM

By Chuck Moozakis

Private equity investment firm M/C Partners has a long pedigree in telecommunications investing. The Boston-based firm, formerly known as Media Communications Partners, has overseen more than $1.5 billion in placements over the past two decades as it focused on companies spawned from the landmark Telecommunications Act of 1996.

So, when Managing Partner Gillis Cashman talks about the firm's latest investment, a $50 million equity placement in data center services company Involta, his thoughts bear at least a cursory listen. Why, after previously investing in heavyweight companies that included MetroPCS and Level 3 Communications, does M/C now believe Involta, with just a handful of data centers in towns such as Duluth, Minn., and Marion, Iowa, is a good bet?

The simple answer? Application performance. Or more specifically, the lack thereof.

“There is a view in cloud computing that data centers are now becoming commodities and that proximity doesn’t matter; you can host your servers anywhere,” Cashman said. But concerns about application performance, and to a lesser extent security, are inhibiting cloud’s success, he said. “When you think about application performance, it really requires a different architecture, where you need to get those servers and applications very close to the end user.

“Instead of 50 servers being in a data center in the middle of nowhere, now what you need is 50 servers at 50 data centers close to the edge where the redundancy is in the network itself.”

And, Cashman said, those DCs should be located where the need is greatest: serving enterprises in communities that aren't covered by Tier 1 or Tier 2 providers. These companies, he said, still have mission-critical applications, but they can't get the service-level agreements they need to ensure their employees and customers are getting the application performance they deserve.

“There is far more insourcing going on in smaller markets,” Cashman said. “The reason is they either don’t trust the facilities in the market or there are no facilities in the market, so they are forced to deploy their applications internally.” To target these types of customers, Involta builds a dedicated fiber link from the DC to the enterprise, effectively creating a leased line. “Performance across this network is guaranteed because it never touches the public Internet, and to me, that is a critical factor that will drive more outsourcing [to data centers]. You need to have that infrastructure in place to effectively place these private cloud architectures.”

Ensuring that Acme Manufacturing in central Iowa has the same broadband capability and application performance as XYZ MegaCorp in New York City is smart strategy, and as M/C Partners almost surely agrees, it's good business, too.

August 8, 2014  5:04 PM

Smaller vs larger MPLS service providers?

Profile: Robert Sturt
Network

I recently read a couple of good articles on UK MPLS and global MPLS, which got me thinking about the differences between large and small service providers and how they serve the market differently.

The default for the enterprise is typically to progress its MPLS proposal with the larger end of the market, which is understandable: an enterprise wants a service provider of comparable stature, whose size offers some comfort about stability. On the flip side, smaller organisations (think SMEs) often avoid the larger service providers in favour of the agility and focus which smaller providers typically offer.

I personally worked for a large service provider in the mid-2000s and recall a strategy change in which the CEO decided to effectively segment the business. In short, the provider decided it was expending far too much of its employees' time supporting SME businesses that represented a fraction of its revenue. As a business decision, it was probably the right one to make, but I imagine the SMEs being told they were effectively being forced into a different support channel were not impressed. Within the same provider, a new program of professional services was also launched, under which large enterprises would be expected to pay for service and project management, i.e. these resources were no longer provided by default. I'm not judging the decision, and in many ways service and support improved for the Enterprise clients, which probably had the budget.

The smaller SME should therefore be wary of entering into contracts with the larger providers, since it may not receive the focus and service given to the larger-paying clients. I appreciate this is a broad statement to make, and larger service providers are making strides in changing how they support the SME market. As an example, BT have launched a specific product dedicated to the SME market, but the release is in its early days, so we will have to see how things pan out.

Let’s look at some of the comparisons.

Clearly, larger service providers have huge revenue streams, which offer the kind of stability enterprises associate with institutions of similar size to their own. That said, profitability still matters: we have witnessed large providers such as WorldCom enter Chapter 11, so size alone does not guarantee stability. However, all things being equal, a large, stable company provides long-term comfort when signing WAN contracts. The smaller providers are often good, profitable organisations, but they have a much shorter way to fall if things go wrong. We know of companies which rely on a few contracts for the bulk of their income and profitability, which is clearly a risk. There are others with a good, broad range of contracts which are more stable and further along their business growth path. It is also true that smaller providers are more prone to strategy changes: in any given month, they may decide to invest in a way that changes their financial position and increases risk.

Staff and coverage are also areas which require clarification. Using another example, a provider we worked with under a consultancy arrangement had only two main POPs (points of presence) in the UK and only a few staff. We asked how they would support offices over large distances and they said, “We would put replacement hardware in a van and ask one of the engineers to drive it over.” Whilst this approach may work, it's clearly not a particularly robust support process.

Coverage varies enormously among smaller providers. Our experience ranges from companies with hardware in an office (yes, really), through those with a couple of core POPs, up to well-engineered networks. I always recommend that IT management looking at procurement clearly understand the true MPLS coverage of service providers.

Over and above coverage, the process for adds, moves and changes varies considerably between the larger organisations and the smaller companies in the marketplace. In my experience, smaller means agility, while larger service providers often create more bureaucracy.


August 6, 2014  10:30 AM

Outsourced MPLS or in-house management?

Profile: Robert Sturt
Network

Over the years there has been one constant split between UK-based organisations and their US counterparts. US organisations tend to procure their MPLS capability as wires-only, i.e. self-managed, versus the UK's tendency to outsource.

We are seeing companies such as BT gain more traction with self-managed WAN products such as unmanaged IP Connect, which allows clients to procure network connectivity without management. Note: For UK readers, IP Connect replaces the IP Clear product.

As a rule, service providers are traditionally a little cumbersome to deal with when making adds, moves and changes. In many ways, this lack of agility is perhaps one of the main reasons why there is so much churn in the industry. In our past life, we assisted organisations with WAN procurement, which provided an insight into why businesses were looking to change service provider. One of the main reasons? Making changes to the network took far too long, the change was often incorrect and the documentation reflecting the updated network was poor.

With the above in mind, I would have expected to see less churn in the US market, simply because businesses are in a position to make their own changes to the network. I believe this tells us that frustration with service provider agility is only one reason why an organisation procures a brand-new WAN; the decision usually reflects the sum of a number of issues and problems.

Outsourcing

When outsourcing to a service provider, you are effectively reliant on the provider to configure and maintain the capability of your edge routers (and potentially switches). There is an obvious benefit here, since outsourcing allows your IT team to focus on other areas of the business rather than the WAN. However, the negative occurs when the provider simply does not act quickly enough and/or misunderstands requirements, resulting in incorrect delivery of the change. In my experience of working with and for large service providers, we've seen the simplest of changes take weeks, which has a profoundly negative effect on the business. When these kinds of delays occur, the WAN becomes a bottleneck rather than an enabler.

In 2014, you would also expect to see innovations in the field of change requests, and we are certainly seeing some improvements. However, the majority of providers still rely on the same bureaucratic processes. That said, we have seen some real innovation, with portals now providing an easy-to-access method of requesting changes, with real-time updates as progress is made. These portals generally only support a 'base' level of changes, though; the more complex or un-productised changes still very much require a manual process.

In-house

Clients which adopt an in-house management approach are well positioned to make changes as and when required. However, any change which also requires the provider to alter the configuration on its network may still suffer delays similar to those experienced with an outsourced capability. In general, though, straightforward changes are completed quickly, as and when required, which provides an agility the service providers generally cannot match. The negatives revolve around having to ensure your IT staff are well positioned to troubleshoot, configure and maintain the due diligence required of a corporate network. As MPLS is a provider network protocol (and not deployed on your edge router), the actual configuration required is relatively simple. Self-managed networks are more often than not still fully monitored by the service provider, which is key to leveraging their knowledge of and insight into the end-to-end connectivity.
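For teams weighing the in-house route, even a small amount of self-built monitoring goes a long way. Below is a minimal sketch, in Python, of the kind of reachability check an internal team might run against its own WAN sites. The site names, addresses and ports are hypothetical placeholders, and it times a TCP handshake as a rough round-trip proxy rather than using ICMP, which typically requires elevated privileges.

```python
import socket
import time

# Hypothetical per-site endpoints (management IP, TCP port) an in-house
# team might poll; substitute your own sites and services.
SITES = {
    "london": ("10.1.0.1", 22),
    "leeds": ("10.2.0.1", 22),
    "glasgow": ("10.3.0.1", 22),
}

def tcp_rtt_ms(host, port, timeout=2.0):
    """Time a TCP handshake as a rough round-trip proxy (no ICMP needed)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None  # unreachable or refused within the timeout

for site, (host, port) in SITES.items():
    rtt = tcp_rtt_ms(host, port)
    status = f"{rtt:.1f} ms" if rtt is not None else "UNREACHABLE"
    print(f"{site:10s} {status}")
```

Run from a scheduler every few minutes, a check like this flags a failing site before the first user call, which is exactly the visibility you give up when everything is left to the provider.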

The last point to make is that outsourcing does not have to be provided by the service provider. There are many opportunities to leverage the ability of niche managed IT companies to look at providing an alternative. As cloud services are becoming more prevalent, IT companies are leveraging their capability to provide a ‘one stop shop’ for managed services. I believe we will see this occur more as we move forward.


August 5, 2014  12:32 PM

IT pros know app performance affects business performance, but what are they doing about it?

Profile: Gina Narcisi
APM, APM tools, enterprise WAN, Riverbed, WAN optimization

Riverbed Technology conducted a short survey of Interop attendees at its booth during the Las Vegas conference in April, but the findings didn't stay in Vegas. The survey revealed that IT professionals know poor performance by mission-critical applications can hurt their businesses, but too few of them are doing anything about it, according to Riverbed.

Riverbed asked 210 Interop attendees about their awareness of the technologies that exist to mitigate app performance problems, and whether their companies were taking proactive steps to avoid application issues, said Steve Riley, technical director for the CTO's office at Riverbed. The top three causes of performance problems, according to Riverbed's survey, are insufficient bandwidth, too much latency and slow servers. But a gap remains between those who are aware of the problem and those who are actually implementing solutions, Riley said. In fact, the results found that 80% of respondents know that slow business-critical applications can have a moderate to extreme impact on overall business performance, but only 50% of them were actually doing something to solve application performance issues.

While budget constraints are undeniable, Riverbed believes that the overall willingness to adopt application delivery and performance management technology that can help is “missing to a certain degree,” Riley said.

“Seventy percent of respondents said, ‘Let's throw more bandwidth at the problem and see if it goes away,’ but only 50% of businesses actually have bought more bandwidth,” he said. And 67% of respondents believe that WAN optimization solutions are a good way to combat application performance issues, but only 42% of businesses were using WAN optimization tools. Finally, 52% of respondents believe that they can solve some performance problems by geographically distributing application workloads, but just 28% of businesses have followed through with that method.

So what can businesses with budget constraints do to mitigate application performance problems? First, Riverbed suggests that businesses evaluate their mission-critical apps before buying a bunch of bandwidth. Some applications simply won't perform better with more bandwidth, because latency and jitter could still be the issue, Riley said.

“Don’t just guess at what your performance problems are,” he said. “We really suggest that people analyze and diagnose their application problems first.”
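In that spirit, here is a minimal, hypothetical Python sketch of a first-pass diagnosis: it separates round-trip latency (timed via a TCP handshake) from raw transfer throughput against the same endpoint. The hostname and URL are stand-ins for one of your own application endpoints, and real tooling would average many samples rather than take a single measurement.

```python
import socket
import time
import urllib.request

# Hypothetical endpoint; point this at one of your own applications.
HOST, URL = "example.com", "http://example.com/"

# 1. Latency: time the TCP handshake, a rough proxy for network RTT.
start = time.monotonic()
socket.create_connection((HOST, 80), timeout=5).close()
rtt_ms = (time.monotonic() - start) * 1000.0

# 2. Throughput: time a full transfer of the same resource.
start = time.monotonic()
body = urllib.request.urlopen(URL, timeout=10).read()
elapsed = time.monotonic() - start
kbit_s = len(body) * 8 / 1000.0 / max(elapsed, 1e-6)

print(f"handshake RTT ~{rtt_ms:.1f} ms, transfer ~{kbit_s:.0f} kbit/s")
# A high RTT alongside healthy throughput points to latency, not
# bandwidth, as the bottleneck -- and buying more bandwidth won't fix it.
```

If the handshake time is high while the transfer rate is fine, more bandwidth won't help, which is exactly the trap Riley describes.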

Businesses should also think about geographic distribution, Riley said. “Keep in mind that it does take time for applications located in one place to move to another place. With major cloud providers scattered all over the world, multiple instances of workloads is becoming an easier thing for people to do, so [users] can access that data anywhere.”

Last, but certainly not least in the opinion of a vendor focused on IT performance, businesses should also put WAN optimization tools where they make sense. “The whole purpose of that technology is mitigating latency, and allowing end users to access applications as if they are local, even if they are on the other side of the world,” Riley said.



June 5, 2014  3:14 PM

Would you buy a soft Junos switch or router from Juniper?

Profile: Shamus McGillicuddy
Big Switch Networks, Cisco, Dell, vSwitch

What’s the term for when you’re just throwing things out there to see what works? Spitballing? I’m just spitballing here. I think Juniper Networks should start selling Junos, the operating system for its switches and routers, as a software product.

I believe in branding. If a company has a brand that users like, use that brand. Everyone loves M&Ms and lots of people love Skittles, but you never see an advertisement for their parent company, Mars Inc. Remember when BlackBerrys were a big deal? Research In Motion (RIM) rarely, if ever, tried to market new products under the RIM brand. Heck, the company changed its name to BlackBerry eventually.

Juniper has Junos. It is a great brand. It’s a great piece of technology. Every Juniper customer whom I’ve talked to loves Junos. Sometimes they don’t completely love the hardware Junos runs on, but they always love Junos. To paraphrase one example from a customer: “The Virtual Chassis technology on this particular model of Juniper EX switch is kind of a pain in the neck to work with, but darn it, I love Junos!”

Juniper knows it has a good thing going with this brand. Take a look at its relatively new network management software brand, Junos Space. When Cisco re-branded its network management software, it chose Cisco Prime. It didn’t bother re-purposing NX-OS, the operating system for its Nexus data center switches, as a management software brand. NX-OS Prime? Too many Cisco customers still grouse about the instability they dealt with in early NX-OS code releases back in 2009 and 2010. But customers love Junos, so Juniper extended the brand.

I know what you're thinking. This is the switch and router industry. These companies don't sell software. They sell boxes: big, vertically integrated systems with high profit margins that (sometimes) keep shareholders happy. But things change. Cisco built the Nexus 1000V, which looks and feels like a Nexus switch but serves as a distributed virtual switch on a hypervisor host. Everyone is at least testing a product with Open vSwitch (OVS) software in it. Cumulus Networks, Big Switch Networks and Pica8 are all building businesses around switch software that can run on white box or bare-metal switch hardware.

Why not throw Junos into the mix? How many Juniper customers would like the chance to run a Junos router on an x86 server, or to regain control over the virtualized access layer of their data center by running Junos on hypervisors? How many cloud providers or Web content providers would try out Junos as an OS for bare-metal switches? I have no idea, but I find the concept interesting.

It will probably never happen, because... shareholders. Cisco would never do this, and neither, it seems, will Juniper. Wall Street wants profit margins. Is it a complete coincidence that after Dell went private, it started selling bare-metal versions of its data center switches with support for Cumulus and Big Switch operating systems?

I’m not so naïve as to think Juniper could just rip Junos out of its system stack and release it as a product. It would probably take a lot of time and money to do such a thing. I might even be dead wrong about releasing Junos as a software product. Maybe people don’t want it in that form. I just think it’s an interesting idea. Last year Juniper memorably announced an enterprise software licensing program, Juniper Software Advantage, but offered absolutely no software products within the licensing regime at the time of the announcement. It was a perplexing move, but months later it moved its security software products into the program. I think people would have really been excited to see Junos available through that licensing scheme. Like I said, I’m just spitballing.


April 14, 2014  10:36 AM

Unlike Elvis, IPv6 is in the building

Profile: Chuck Moozakis

As the large group of IT administrators and engineers filed into the IPv6 session at this year’s Interop Las Vegas, they probably were expecting to hear yet another boring lecture about the importance of upgrading from IPv4.

After all, it’s been 15 years since IPv6 was ratified. And for proponents of the specification, it must seem it will be another 15 before it gets adopted in the majority of enterprise and service provider networks.

These Interop attendees already knew the benefits of IPv6: it is a more efficient transmission method; it supports more robust security standards. And they are well aware that IPv4 is running out of addresses. IPv6, with its 128-bit addressing scheme, is the only logical solution, backers say.

But those reasons alone hadn’t motivated most of the engineers sitting in the Mandalay Bay conference room in Las Vegas to make the move. Despite the benefits of IPv6, making the necessary infrastructure changes to adopt the new standard takes work, time and money. And for most organizations, those three ingredients are always in short supply.

So it was up to speaker Edward Horley to persuade these professionals that it was time to move to IPv6. And he had a very convincing argument: You already have IPv6 in your networks, he said, and if you haven't properly planned for it, prepare for trouble.

And how did IPv6 sneak into the networks overseen by these engineers, the vast majority of whom said they weren't running the new protocol? Through simple upgrades, Horley said. Upgrades or deployments of such common operating systems as Microsoft Windows Vista, 7, 8 or Server 2008, for example, are all grounded in IPv6, as are Server 2008 R2, 2012 and 2012 R2.

“You’ve already deployed it. It’s already there, and you better know what it is doing in your network,” said Horley, principal solutions architect at Campbell, Calif.-based Groupware Technology and a long-time IPv6 evangelist. “This is one of the things people don’t understand. It’s on by default. This is your domain of responsibility, and it’s your job to understand it.”
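As a quick self-check of Horley's claim, here is a minimal Python sketch that lists the IPv6 addresses a host already resolves to. It is only a rough audit under stated assumptions: name resolution may miss link-local and autoconfigured addresses, so a fuller inventory would inspect the interfaces directly.

```python
import socket

# List the IPv6 addresses this host resolves to. Rough audit only:
# link-local and autoconfigured addresses may not appear via name
# resolution, so also check interface configs for a full picture.
hostname = socket.gethostname()
try:
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
except socket.gaierror:
    infos = []

addresses = sorted({info[4][0] for info in infos})
if addresses:
    print(f"{hostname} already answers to IPv6 addresses:")
    for addr in addresses:
        print(f"  {addr}")
else:
    print(f"no IPv6 addresses resolved for {hostname}")
```

On a stock Windows or Linux box, odds are the list is not empty, which is Horley's point: the protocol is on by default whether you planned for it or not.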

What all of this means, Horley said, is that network administrators better learn, and learn quickly, how IPv6 will affect their operations. “I am here to tell you that you did deploy IPv6, you didn’t do it in an educated way and you need to understand the impact,” he said. “It’s alarming: The vendors did this and they didn’t tell you. Well, it’s easier to adopt and support IPv6 than to run away or ignore it.”

Horley didn't sugarcoat the challenges administrators face. The transition, he said, “will be ugly for everyone because we've been kicking the can for 10 years.” As a result, carriers have resorted to tactics such as carrier-grade network address translation (CGN) to mitigate the exhaustion of IPv4 addresses. But there are serious shortcomings to that technique, especially for websites such as Google Maps that require hundreds of sessions to complete. And dual-stack, a technique that permits the support of both protocols, isn't a long-term answer either, Horley said.

“Six solves these problems,” he said.

But wait. As the infomercial announcers like to say, there's more.

Further delay in migrating to IPv6 will also begin to seriously impact such elemental services as VPNs, VoIP and Session Initiation Protocol-based operations, Horley said.

Bottom line? “You don’t have to start tomorrow, but you do have to start thinking about it,” Horley said, adding yet another reason why administrators can’t wait: the end of Windows XP support. As that popular OS gets phased out at enterprises, three guesses at what’s waiting in the wings.


January 24, 2014  5:37 PM

Threat landscape still rocky, but tools can help

Profile: Chuck Moozakis

If it makes you feel any better, organizations spent more than $12 billion on firewall, intrusion prevention, endpoint protection and secure Web gateway products last year. And that's just a fraction of the tens of billions of dollars enterprises spent overall in the past 12 months to protect their digital assets.

Alas, it's not nearly enough, as recent data breaches at Target and Neiman Marcus have illustrated.

And the best (that is, worst) is yet to come.

“I really think we are looking at some new aspects” in malware and enterprise vulnerabilities, said Gartner Research Director Eric Ahlm at a McAfee data protection webinar held in mid-January. “There is a change in the threat landscape.”

Among the changes: User-based attacks are becoming easier, and targeted attacks have become much more intelligent.

“Being able to prevent is much more of a challenge,” Ahlm said.

At the same time, hackers have a well-oiled ecosystem, whether they are organized state agents or solitary data thieves, and they can easily tap into a willing market in which to sell their stolen information.

But wait. There's more: The continued growth of mobile devices is bringing with it some especially sobering security trends, according to Gartner, including the following:

–By 2018, 25% of corporate data (compared with 4% today) will bypass perimeter security and flow directly from mobile devices to the cloud.

–Through 2017, 75% of mobile security breaches will be the result of mobile application misconfigurations.

“If we've lost our control plane and lost our visibility plane, it's going to make [asset protection] much more challenging,” Ahlm said.

That said, not all is gloom and doom. Adaptive, rather than preventive, security will become an important weapon in enterprise security arsenals.

“We need to be able to find compromised systems and know what methods we have to find these systems,” Ahlm said, adding that a security strategy anchored by situational and contextual awareness platforms will be critical.

“Security teams need to hunt and they need to look. Knowing what's involved and what's in play will be vital in building programs that succeed.”

Other advice:

–Use network analysis in conjunction with global threat intelligence feeds to determine if a system is under a hacker's control.

–Correlate internal information such as network logs, network behaviors, host behaviors and user importance. That situational awareness can help organizations prioritize and triage in the wake of a data breach, Ahlm said. (A minimal sketch of this kind of correlation follows.)
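Here is that sketch: a short, hypothetical Python pass that checks exported firewall log entries against a threat-intelligence feed of known-bad IPs. The file names and CSV columns (timestamp, src_ip, dst_ip) are assumptions for illustration; a production pipeline would stream logs and refresh the feed on a schedule rather than read static files.

```python
import csv

# Hypothetical inputs: a threat-intel feed of known-bad IPs (one per
# line) and a firewall log exported as CSV. Column names are assumed.
with open("threat_feed.txt") as fh:
    bad_ips = {line.strip() for line in fh if line.strip()}

hits = []
with open("firewall_log.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        for field in ("src_ip", "dst_ip"):
            if row.get(field) in bad_ips:
                hits.append((row.get("timestamp", "?"), field, row[field]))

# Internal hosts talking to feed-listed addresses are triage candidates.
for ts, field, ip in hits:
    print(f"{ts}: {field}={ip} matches threat feed")
```

Even this toy version illustrates Ahlm's adaptive posture: rather than trying to prevent every intrusion, you hunt for the hosts already talking to known-bad infrastructure.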


August 9, 2013  11:15 AM

Making the move from IT to ET

Profile: Chuck Moozakis

Say bye to information technology. Say hello to enterprise technology.

So says Nemertes Research President Johna Till Johnson. In a discussion highlighting Nemertes’ 2013-2014 Enterprise Technology Benchmark study earlier this summer, Johnson said the shift from IT to ET will be no less dramatic than the transition from MIS to IT 30 years ago. It’s a swing, she said, that will have a big impact on IT professionals.

“In a nutshell, what we are seeing is that IT is now being asked to be a trusted adviser to drive the business,” she said. “IT practitioners are now being asked to move into an enterprise technology role,” supporting and guiding the entire business.

Fueling the shift: the rise of the remote worker, untethered from the office and free from the physical network. Employees, Johnson said, “are not at their desks; they are out serving customers, taking orders.” The result: Instead of networking knowledge workers, administrators today must network the broader enterprise.

Fortunately for IT executives, COOs and CEOs appear to be actively soliciting their advice on how ET can be made a reality. Nemertes' research found that 73% of CIOs responding to its survey have been asked to participate in an ET transformation project; only 13% of CIOs gave the same answer in Nemertes' 2012 survey. That's approximately the same result Cisco found in its 2013 Global IT Impact Survey, which noted that nine out of 10 IT execs collaborate with corporate brass at least monthly to coordinate strategic initiatives.

That's the good news. The challenge: Becoming ET-savvy won't come without a hitch. Where IT is all about getting information transmitted from point A to point B, ET is about understanding how that conveyance helps the organization innovate its operations. Or, as Johnson described it, “IT is about getting the trains to run on time; innovation is about disrupting the existing process, so we are seeing the concrete impact of the innovator's dilemma,” in which companies risk their own survival by failing to adopt technologies or strategies that will meet their customers' future needs. Orchestrating that shift successfully “will have a huge impact for us who work in the tech field,” she said.

So get ready for ET. It will be here sooner than you might think.


June 24, 2013  11:21 PM

Cisco’s Nexus 7700 makes jaws drop at Cisco Live

Profile: Shamus McGillicuddy

On Monday, Cisco's carefully curated press program at Cisco Live Orlando focused on the refresh of the company's campus switching portfolio, most notably the new Catalyst 6800 series of chassis switches, whose supervisor modules and line cards are backward- and forward-compatible with the Catalyst 6500.

Meanwhile, Cisco Live attendees got a special treat during a keynote presentation by David Yen, vice president and general manager of Cisco's data center technology group. Yen shared the stage with a very tall member of the new Nexus 7700 series of data center core switches. The new family of switches has up to 83 Tbps of total switch capacity, with 1.3 Tbps of throughput on each slot. A fully populated top-of-the-line Nexus 7700 can support 384×40 Gigabit Ethernet (GbE) or 192×100 GbE ports. The switch had attendees swooning.

When details of this monster switch leaked out over Twitter, I asked the Cisco PR team about it. They were tight-lipped. The official announcement is scheduled for a Wednesday press conference, and Cisco isn't prepared to offer details to the general public yet. Some content about the switch appeared on Cisco's website recently, but it was scrubbed after some network engineers stumbled upon it.

Given that Insieme Networks executives are scheduled to participate in the Wednesday press conference, I expect the news will be much bigger than just the Nexus 7700. My sources have told me that Insieme has been developing a massive fabric controller that can orchestrate the entire data center, not just the network. Rumors have persisted that the stealth Cisco spin-in is deeply involved in software-defined networking, too. But I’ve never been able to confirm that rumor.

I’ve heard Cisco executives boasting that this year’s Cisco Live will feature one of the biggest collections of major new product announcements in years. The Catalyst 6800 was a good start and the Nexus 7700 was impressive, but I think there’s more to come.


April 24, 2013  11:13 AM

Curran: ‘Not on Internet’ unless on IPv6

Profile: Chuck Moozakis
John Curran, ARIN CEO & President

John Curran, the voluble president and CEO of the American Registry for Internet Numbers, didn’t waste any time exhorting attendees at last week’s North American IPv6 Summit in Denver to break the logjam delaying the widespread implementation of the next-generation protocol.

“Why are we doing this?” he asked. “What is the one event” that will spark the momentum needed to fuel IPv6’s adoption?

Curran said the energy sparked by the realization that IPv4 addresses would soon disappear had sputtered over the past year as enterprises and ISPs found other ways to manage device identification and addressing.

“ISPs say customers aren’t asking for [IPv6], and you can’t expect ISPs to deploy when customers aren’t asking for it,” he said. And why aren’t customers demanding their providers switch to IPv6?

“Because they believe they are already connected to the Internet. We must disabuse them of that notion.”

To Curran, IPv4 is not the Internet, users’ claims notwithstanding.

“You are connected to a subset of the Internet,” he said of those users who believe they're fully Web-enabled.

“We have to begin to tell customers they are not on the Internet; they are paying to be on, but they need to be told they are not on the Internet unless they are on IPv6,” Curran said.
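Curran's claim is easy to test for any given site. The minimal Python sketch below checks whether a hostname publishes an IPv6 (AAAA) address at all, i.e. whether an IPv6-only client could even reach it. The hostnames are placeholders; substitute the sites your own users depend on.

```python
import socket

def has_ipv6(host, port=80):
    """Return True if the host resolves to an IPv6 (AAAA) address."""
    try:
        socket.getaddrinfo(host, port, socket.AF_INET6)
        return True
    except socket.gaierror:
        return False

# Placeholder hostnames; check the sites your own users depend on.
for site in ("example.com", "www.google.com"):
    label = "reachable over IPv6" if has_ipv6(site) else "IPv4 only"
    print(f"{site}: {label}")
```

Run this against a list of the services your business consumes and publishes, and you have a rough measure of how much of “the Internet,” in Curran's sense, you are actually on.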

To be sure, Curran and ARIN have a vested interest in encouraging enterprises and ISPs to adopt IPv6. North American IPv4 addresses will be exhausted by 2015, and with millions of user devices and other Internet-aware gadgets slated to come on-stream in the next few years, IPv6 is the only alternative. But IPv6 adoption, at least in the United States, has been glacial. While a little more than a third of U.S. government websites are IPv6-enabled, only 3.7% of industry websites and 5.7% of educational websites are, according to stats shared at the Summit.

And that doesn’t say anything about how few enterprises’ internal networks natively support the IPv6 protocol.

For better or worse, many ISP and enterprise executives remain reluctant to invest the time, money and resources necessary to migrate to IPv6. The protocol’s proponents, and there are many, understand this. But they also understand how critical it is for companies and carriers to embrace IPv6.

Not because it can handle as many addresses as there are grains of sand, as the saying goes, but because it will usher in new services and new capabilities that U.S. businesses and ISPs will need to remain competitive in the years and decades to come.

