Continuing on a relatively rare activist track, the FCC has ordered wireless operators to establish reasonable data roaming agreements among themselves, something that is seen as increasing wireless competition and thus potentially reducing retail rates. It’s also ordering utilities to simplify access to utility poles and conduits and to price usage fairly. The problem is that both of these things are considered by some (including Republicans in the FCC) as extending beyond the FCC’s powers.
The key point with this set of arguments isn’t the issues (which we’ll get to) but the fact that they both raise questions about how the FCC can operate and where it needs Congressional action. The Communications Act of 1934, as amended by the Telecommunications Act of 1996, regulates “common carrier” activity most tightly and offers the FCC considerable power there, but much of broadband, cable, and mobile service falls outside that common carrier area.
Given that there’s an increased interdependence among television broadcasting, the Internet, mobile and wireline voice, and broadband services, it’s hard to rationalize the separate “Title” regulations of the Act that differentiate rules and the power of the FCC by service type. What some of the appeals over FCC rules may end up doing is illustrating (by means of a court opinion) that we need a completely fresh communications act. A good one would help, but with Congress unable to get past dogma to even pass a budget, getting a good one seems out of the question.
An industry consolidating is an industry commoditizing at the product/service level, and that’s obviously happening in telecom. The AT&T bid for T-Mobile has now been followed by a Level 3 bid for Global Crossing; both are subject to regulatory approval, of course.
The simple reason for all of this is disintermediation, the fact that operators have become disconnected from mounting service revenues generated by advanced services while still committed to carrying traffic and supporting connectivity. If you go back to the old days of Custom Local Area Signaling Services (CLASS), you may recall that even in the 90s it was this group of services (which included Call Forwarding, Voicemail, etc.) that generated the highest ROI, and you may also recall that it was this class of services that gave rise to what became known as the Advanced Intelligent Network (AIN) architecture for voice services.
For the past four or five years, operators have been groping for an architecture for the Next-Generation Advanced Intelligent Network (NGAIN). We’ve had a chance to review the approaches taken by operators in three major global regions, and there’s a significant amount of commonality in the ideas presented.
What’s also interesting is that all of the players seem to be basing their next-generation plans on cloud computing principles and software technology frameworks rather than on network equipment. IBM and Microsoft seem to be more an inspiration than Alcatel-Lucent or Cisco or Juniper.
In a related move, Verizon’s Digital Media Services group announced its bid for an in-house NGAIN-like approach focused on digital video. The company wants to offer everything from transcoding and content delivery network management to rights management and (of course) wireline and mobile multi-screen delivery. By grabbing all of these elements, Verizon raises the bar for competitors like (you guessed it) Level 3, which must now capitalize comparable approaches of their own. The question that Verizon isn’t answering is whether the elements of its media architecture will be componentized and flexible enough to serve related applications that need information coding, rights management, location-based services and more.
Let’s look at Cisco. The company has been a major disappointment to Wall Street for years now, and there’s been ongoing grumbling about a Cisco breakup or drastic changes in style and management. That now may be coming to a head as CEO Chambers promises a make-over, admitting that the company’s “execution” has fallen short. Interesting comment, given that any failure can be said to be an execution problem. What will happen to Cisco will depend on how it defines “execution”.
Make no mistake about it; Cisco cannot go back to being a switch/router company. Price-per-bit trends make it clear that telecom switching, routing and access equipment is going to be under tremendous price pressure, creating a market that not only Cisco can’t win in but that likely nobody can. To say that Cisco should divest its consumer division and get back to basics is to say that it should lie down with a rose on its chest and await the inevitable. Chambers has the germ of the right idea with “adjacencies”. The problem lies in the way that notion is acted upon.
Two developments in the market reflect the fact that there’s something more than moving bits going on; Alcatel-Lucent has announced an OpenTouch middleware package for UC/UCC and Verivue has announced a new content delivery framework. UC/UCC isn’t a big market-maker in the telecom space and certainly won’t make Alcatel-Lucent rich, but the fact that Alcatel-Lucent thinks UC middleware is important may mean it realizes (finally) that middleware overall is important. Yes, I know that its Open API program and developer activities seem to demonstrate a middleware commitment, but the problem is that the underlying platform for developing service-layer assets is only implied by that activity and not revealed. Maybe now they’ll reveal it.
Verivue’s announcement is more directly aimed at the service layer. CDNs are increasingly important not because they’re a good business (Wall Street is increasingly down on all the independent CDN players), but because some CDN elements are essential in an ISP content monetization strategy. What makes Verivue interesting to me is that its CDN platform is based on virtualization, which makes it cloud-compatible. Service providers and ISPs of all types tell me that their content monetization strategies have to be based on cloud technology, component re-use, and a higher-level understanding of how content distribution fits as a part of a general service-layer architecture. Verivue can answer those questions, I think. However, the cloud element of its capability isn’t the keystone of its positioning. That likely reflects the “Cisco problem”; nobody wants to be strategic when that means embarking on a longer selling cycle.
Consolidation in the vendor space is inevitable, for the same reasons that it’s happening already in the carrier space. Unless vendors step up to the reality that systemic, strategic, complicated changes are needed to create a new revenue model for operators, their fate is sealed at all levels.
With a lot of interesting and potentially pivotal happenings in tech, perhaps the most interesting thing is that the real meanings are more important than the surface topics. AT&T wants to buy T-Mobile, which is no surprise given that DT has been looking for years at selling its U.S. property and that AT&T wants to own the world in the mobile space. What is a surprise is that Verizon seems resigned to the deal getting regulatory approval, which suggests it’s not actively lobbying against it. So why would Verizon consent (or at least acquiesce) to its biggest competitor getting even bigger? That’s one of those below-the-surface points; two, in fact.
One guess at the biggest reason is that T-Mobile and Sprint are the two players most likely to consider pushing the FCC to create reasonable, open, broadband roaming requirements. All the smaller cellular operators would like that, but only two have deep enough pockets to fund a campaign, and clearly T-Mobile is one of them. Remember that the appeals of FCC orders on unbundling were funded by the IXCs until the RBOCs bought them? The same dynamic could be at work here.
As a related point, healthy competition is key to regulators in the U.S. wireless market. It may be that Verizon reasons that neither Sprint nor T-Mobile can make an effective go of competing. Problems with the financial position of the second-tier players could then spur regulators to act. Obviously T-Mobile’s stability and endurance wouldn’t be in question if AT&T bought them. Could Sprint then capitalize on the new dynamic? Would a cable company, or even Verizon, buy them? Could the FCC accept a duopoly instead of its cherished notion of three key players?
The service layer, which means cloud-to-network binding for both enterprises and service providers, is the sweet spot of the future market. Own it and you can hope to pull through your solutions en masse. It’s still open territory.
There may be cloud architecture competition emerging from new quarters. F5 announced it had worked with IBM to develop a reference architecture for the cloud. The architecture clearly covers the creation of private clouds based on virtualization, and F5 promises that it will be extended to envelop public cloud components to hybridize them with private clouds. We see no reason why the architecture (which looks much like Eucalyptus, and that’s no accident according to F5) can’t be used for public cloud applications, including service provider clouds.
IBM has specific aspirations in the service provider space, and the reference architecture may be a step in helping prospective service provider clients build cloud services that can then easily hybridize with enterprises. It seems to us that the approach would also support SOA applications, but that’s not a specific part of the release.
HP’s CEO is promising a more cloud-engaged HP, which seems a smart move given that it’s clear that cloud computing will in some way be the driver of virtually all data-center-centralized IT consumed in the next five years.
I’m not suggesting everything migrates to public clouds with Google hosting banking applications or something. What’s going to happen is a gradual hybridization of private and public cloud architectures. That means any company with server aspirations better get on board, and HP has had a good set of tools and no blueprints. Of course, the same can be said for most network vendors, even about Oracle. Microsoft and IBM get top marks for having a cloud strategy for both enterprises and service providers, and Cisco still wins among the network vendors.
XO is getting into the cloud services business too, hardly the first carrier to do that, and it says it will be focusing on the SMB space. That’s a much more ticklish proposition than most are willing to admit. My surveys show that SMBs are likely to cloud source a larger portion of their total IT spending (compared to large enterprises). The problems are that 1) total SMB IT spending is smaller than that of large enterprises, and 2) the cost of sales and support for SMB customers is much higher.
My model says that companies like XO can sell services to their current base, but it will be difficult for them to expand beyond that. With a relatively small target audience, it’s then a question of whether XO can gain enough economy of scale to be an effective cloud player.
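The arithmetic behind that squeeze is easy to sketch. The numbers below are purely illustrative assumptions (not figures from my surveys or model), but they show how a higher cloud-sourcing share of a smaller spending base, burdened with higher sales and support costs, still yields less net margin:

```python
# Illustrative only: every dollar figure and ratio here is a hypothetical
# assumption, used to show the shape of the SMB-versus-enterprise tradeoff.

def cloud_margin(addressable_spend, cloud_share, sales_cost_ratio,
                 delivery_cost_ratio=0.50):
    """Net margin from cloud-sourcing a segment's IT spending.

    revenue = spend the segment moves to the cloud
    costs   = sales/support plus service delivery, as fractions of revenue
    """
    revenue = addressable_spend * cloud_share
    costs = revenue * (sales_cost_ratio + delivery_cost_ratio)
    return revenue - costs

# SMBs: smaller base, larger cloud share, much higher cost of sales/support.
smb = cloud_margin(addressable_spend=10e9, cloud_share=0.30,
                   sales_cost_ratio=0.25)

# Large enterprises: bigger base, smaller cloud share, cheaper to sell to.
ent = cloud_margin(addressable_spend=40e9, cloud_share=0.10,
                   sales_cost_ratio=0.10)

print(smb < ent)  # the SMB opportunity nets less despite the higher share
```

The point isn’t the specific values; it’s that the cost-of-sales ratio scales with revenue while the addressable base doesn’t, which is why scale matters so much for a would-be SMB cloud player like XO.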
XO’s situation is reflective of the cloud market overall — you either are a big player or you’re an inefficient and therefore marginal player.
Juniper provided analysts, including me, with some more color on their QFabric security integration. There are two dimensions to this. First it illustrates in some detail what a “service” is in QFabric terms, and second it illustrates the exploding complexity of security in a virtualization and cloud environment. The combination of these things positions QFabric much more directly as an element in a cloud computing strategy, which is how I’ve viewed it from the first. Since I think anything truly cloud-enabling is important, we’ll look at this a bit here.
In the QFabric pitch, Juniper’s CTO made a point of saying that Juniper QFabric offered both virtual networks and “services” as its properties. Security is considered a service, and the way a service is created is by routing a flow through a service “engine”. Because the flow’s route explicitly intersects with a security process, the flow route is secure without external intervention. You can see that this sort of engine/service thing could easily be applied to other things like application acceleration, although Juniper hasn’t said anything about what might be a QFabric service beyond security. Interestingly, by making security appliances into QFabric service engines, you effectively virtualize security and increase the number of machines you can secure with a given device. It’s like the old test of a good compiler; can it compile itself? An offering to secure virtualized resources has to be a virtualized resource.
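The engine/service idea can be modeled abstractly. This is a conceptual sketch of flow steering through shared service engines, not Juniper’s actual API; all the class and function names here are my own hypothetical illustrations of the mechanism described above:

```python
# Conceptual model of "service as a flow steered through an engine":
# a flow's route explicitly intersects each service engine, so the
# service applies without any external intervention at the endpoints.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Flow:
    src: str
    dst: str
    inspected_by: List[str] = field(default_factory=list)

class ServiceEngine:
    """A shared appliance that processes every flow routed through it,
    effectively virtualizing one device across many machines."""
    def __init__(self, name: str, process: Callable[[Flow], None]):
        self.name = name
        self.process = process

    def handle(self, flow: Flow) -> Flow:
        self.process(flow)            # e.g., firewall inspection
        flow.inspected_by.append(self.name)
        return flow

def route(flow: Flow, chain: List[ServiceEngine]) -> Flow:
    # Steering the route through the chain is what makes each
    # engine's function a "service" of the fabric.
    for engine in chain:
        flow = engine.handle(flow)
    return flow

security = ServiceEngine("security", lambda f: None)
accel = ServiceEngine("acceleration", lambda f: None)  # a plausible future service
flow = route(Flow("vm-a", "vm-b"), [security, accel])
print(flow.inspected_by)  # ['security', 'acceleration']
```

The key property the sketch captures is that one engine instance serves every flow whose route crosses it, which is exactly why a security appliance recast as a service engine secures more machines per device.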
Politicking over the net neutrality rules continues, with the House holding a hearing on the matter. It’s pretty hard for the House to overturn an FCC order without enacting legislation, and that’s not going to pass the Senate or survive a Presidential veto, so the whole thing is clearly an exercise. The real test for the net neutrality order will come in the courts, and it’s virtually impossible to say how long that might take to work through. But the debate shows the depth of idiocy associated with the whole process.
Meanwhile new developments in the market continue to raise the stakes for operators. The FCC’s position can be reduced to this: “Consumers lack enough choice in broadband providers to allow them to vote against site blocking with their feet”. True, but that’s not really the part of the order that most people object to. You can simply say “no blocking of sites or discrimination in traffic handling based either on the site or the traffic type” and be done with it.
The FCC didn’t do that. Instead it took excursions into things like whether offering premium handling was tantamount to a kind of blocking-one-by-exalting-another relativism. Even the question of whether a type of traffic could be blocked is kind of moot, in my view. As long as operators didn’t have the ability to apply different rules in different geographies, providers would face immediate competitive disaster if they were to impose unusual handling restrictions. But the real problem is whether the FCC has any authority at all in this matter, and that’s what the courts will decide.
We’re counting down to the launch of the RIM “Playbook” tablet and wondering how competitors will manage the new iPad 2 in their plans for the fall. The challenge for them all at this point is the sense that there’s still got to be another generation of Android tablets to catch up, which means that the current generation may be obsolete even before it’s released. Not only does that hurt sales, it could discredit a complete product line by stomping on its launch and limiting early interest and market share. It’s the first announcement that gets the most ink.
Enterprises are also starting to work through the issues of tablet-based collaboration, and interestingly that’s one of the things RIM is expected to try to exploit. A tablet is most valuable as a collaborative tool for “corridor warriors,” in what my research identified as supervisory intervention applications rather than team activities.
In supervisory collaboration, a worker seeks approval or answers on a particular issue, an issue normally represented as a document or an application screen. The process demands that the supervisory/support person share the document/application context and simultaneously discuss the problem. Thus you need voice and data together.
Some tablet vendors and media types have suggested that video collaboration is the answer—tablets have the cameras after all. The problem is that video takes a lot of capacity, people don’t like random video calls that intrude on their current context, and there’s no evidence that video helps paired relationships be more productive.
Voice is the answer, but how exactly do we use collaborative voice with tablets? RIM’s answer is likely to be by creating a tight link between the tablet and a Blackberry, and that may be a good approach. We’ve noted this issue in some enterprise comments on the difference between iPhones on AT&T and the same phone on Verizon; the collaborative multi-tasking support is better on the first than on the second.
Image counts in every way and at every level of purchase decision-making. And Huawei is one vendor that knows that better than most. From the first, Huawei has been tarred with its association with China at multiple levels – first as a poster child for the “cheap Asian economics” story, but often behind the scenes as a sinister agent of communism.
Huawei’s failure to complete the intellectual property acquisition of 3Leaf Systems was apparently the last straw, and Huawei issued an unprecedented open letter to U.S. officials and in parallel to the U.S. market. “We’re not your enemy” was the sense of the letter, and while there’s no question the message is self-serving and inaccurate at the economic level, it’s true at the political level, in my view.
With China, there seems to be a combination of cultural and economic xenophobia that taints our perception of the country. Huawei knows that, and it’s asking us to re-examine our motives. Personally, I think everyone needs to go through that exercise, but whether you do yourself is your issue. Here, I want to focus on the industry import of the move.
Huawei definitely needs to succeed in the U.S. market. U.S. (and other major national) vendors would like Huawei to fail, because as a price leader, Huawei is destructive to their margins in the near term and their market share in the longer term. The open letter is a signal that Huawei is going to address the points of resistance to its success and that it intends to make a more aggressive move in the U.S. That has major consequences for the market because the U.S. is a proving ground for so many networking innovations.