Cisco held its investor meeting yesterday, and the Street view of the event was considerably more favorable than its view of rival Juniper. Analysts appear to agree when Cisco’s Chambers says that Juniper is “vulnerable”. My readers know that I wrote about this Cisco view some time back. Cisco says this is because Juniper is spread too thin between service provider and enterprise. Along the way, Chambers indicated that Cisco would be taking an edgier position against HP and acknowledged that Huawei is a strong competitor.
So what’s really going on here? Is Cisco going to give us the networking equivalent of the negative ad campaign, and if so, why? Negative ads are designed not to influence voters but to disgust them, to keep the dangerous unaligned out of the polling places so your party hacks matter more. Keeping the buyer out of the market is exactly what Cisco is risking here. And why take that risk? It could be because Cisco is hoping to make something happen, and it wants Juniper in particular to be on the defensive.
Juniper is a habitual counter-puncher; it has allowed Cisco to set the stage time after time and then boxed against the Cisco initiatives. The fact that Cisco thinks Juniper is spread too thin almost invites everyone to believe that Juniper might “un-spread” itself. But that could only come by reducing its enterprise commitments—Juniper depends too much on the service provider side. If that’s what Cisco wants, then it wants it because Juniper’s upcoming QFabric might be harmful to Cisco’s data center and content strategy.
If Juniper responds to Cisco as usual, by simply rebutting the comments Cisco makes, then Cisco will control the market dialog. If you want Juniper to talk about something, attack it there and it will help you drag the issue around the media circuit. If you don’t want something discussed, then just stay mum; Juniper will not raise the issue either. So for Juniper here, the right answer is to forget what Cisco is saying and focus on what Cisco doesn’t want Juniper to talk about.
We’re heading into the fall now, and with the change in season will come a new period of technology-strategy planning for both enterprises and the service providers. I’ve tracked the former group with a formal fall survey since 1982 and the latter since 1991, and the results of the surveys are always interesting. This year instead of publishing a special report in November to cover the results, I’m integrating them into our December Annual Technology Forecast issue of our technology journal, Netwatcher.
For the enterprise, the challenge with project spending has been identifying projects that provide a net benefit. Over the last 10 years, the focus of enterprise projects has shifted from enhancing the top line to defending the bottom line; that means shifting from a productivity-driven thesis for projects to a cost-management thesis. The problem is that cost management converges on a vanishing point; you can’t continually build IT spending on a static set of benefits and at the same time demand “improvements” in ROI unless you take spending levels toward zero. There is still a credible “cost” on the table, associated with the management of application performance, but it hasn’t been addressed in an organized way by the vendor community in general or by networking vendors in particular. Neither group has been able to come up with productivity-based benefits to drive spending up, either. This fall we’ll see whether that changes.
For network operators, monetization is obviously the big problem. Right now I’m seeing operators pretty pessimistic about wireline investment except in emerging economies. The Internet is the only wireline driver for traffic growth, and it’s a driver whose growth is currently non-monetizable under neutrality rules and the unlimited-usage paradigm. Operators have identified three priority areas (content, mobile/behavioral, and cloud), but only the last is getting much near-term capital support, because the first two rely almost totally on the emergence of a service-layer paradigm—an NGN Advanced Intelligent Network architecture. That’s not been happening, at least not in an open sense, and so I’m seeing an accelerating shift of capex to mobile networking, where dollars buy cell sites and backhaul and switches and not routers. That shift works against the network vendors overall, but in particular against those who don’t have much of an RF/cellular stance. That, of course, is why Cisco struck its agreement with NEC on monetization.
In video, we’re seeing a bit more focus on the kind of features more likely to be associated with monetized video than with passive streaming. Vendors are jumping onto this, not with what we’d like to see (a true, architected service layer) but at least with silo video offerings. I noted that Alcatel-Lucent had announced a multi-screen application, and NSN did the same this week. The NSN offering is surprisingly glitzy for a company not exactly known for marketing sensationalism; it demonstrates screen-switching and social video, for example. And in fact, both NSN and Alcatel-Lucent may be moving toward a unified service-layer approach. Whether for sales-focus reasons, because they’re hoping to sell professional services, or simply because they’re not there yet, the current material doesn’t talk about service-layer integration and orchestration.
It’s fair to ask what this is all going to mean, and I think one thing that’s certain is that the network of the future is going to revolve around the CDN. Content is the majority of traffic growth. Content is the majority of monetization opportunity. Content that has any monetization is served by a CDN to manage QoE. Content that has none is served by a CDN to control bandwidth utilization. Getting the picture here?
But the CDN of the future isn’t the old Akamai model peering-point connection. It’s deeply distributed, it’s highly policy-managed with respect to where caches go and what goes into them, and it’s highly componentized so that it can be composed into flexible media offerings that are specific to the operators’ local needs and rules. You can see this model emerging from both Alcatel-Lucent and NSN, and also being expressed at least by Cisco and Juniper. You can even see it from CDN startups like Verivue.
The fact is that every operator is going to need both bandwidth optimization and content monetization. That these missions are very different means that CDNs, and the logic that’s built around them, have to be very flexible. We’re working to find out just how flexible all these options really are, and we hope to provide some of that this month in Netwatcher, and more in future issues.
The whole video thing is going to be impacted by the fall season of technology product launches, among which are supposed to be Apple’s new iPhone and Amazon’s tablet. We are seeing iPhones and Android smartphones gaining traction even as gaming platforms, and obviously we’re seeing tablets increasingly as personal media portals. There will be people who will use smartphones more (youth), people who use tablets more (everyone else), and all of these people will be both stressing networks and generating opportunity. I don’t think that the OTT video market will really threaten channelized TV because I doubt that the delivery of that much material can be made affordable to the consumer and still return anything reasonable on investment for the operator. I do think that the revenue kicker it can add to commercials embedded in standard content could be very significant, and so I think you’re going to see more from vendors to address streaming and monetization of video.
One of the more interesting wrinkles in the ongoing tablet wars is a decision by more cable companies to back away from any commitments (on their own or as MVNOs) for wireless capabilities. There was a time when everyone thought the quad play was going to be a major requirement, so how did this happen? Apple, in a word, but there’s more.
First of all, the iPhone created an appliance magnetism that broke many customers away from having cellular services from their home carriers. It disproved the notion that you could create loyalty with non-functional bundles alone, and that in itself was a major factor in limiting interest in quad-play economics.
Second, it has proved more complicated to create FUNCTIONAL bundles, active symbiosis between wireless and wireline, than was previously thought. Yes, it’s possible to create apps that let you do something on or with your TV, but for the key youth market those tools are less interesting because they’re not home anyway. And service-layer technology, an architecture or framework that would let operators (including MSOs) build sophisticated componentized services from features, has been hard to come by.
Third, tablets are proving that if consumers have a larger form factor and a place to sit, they will consume “TV Everywhere”. On one hand, this might appear to promote a cable company’s entry into cellular, but it doesn’t, for two reasons: usage costs and hospitality hot spots. You don’t have to stream many videos to your tablet to run into extra-cost territory, and in any event, why pay for mobility when you need to sit down to watch?
Since tablet vendors offer WiFi tablets at a much lower cost than cellular-equipped models, more and more consumers are jumping on that approach, and TV Everywhere doesn’t have to include that many places that don’t offer WiFi. I think we’re going to see WiFi exploding at the same pace that tablets have exploded, and I think we’re going to see less focus on “wireless” and more on WiFi. One more reason why the DoJ should let AT&T and T-Mobile merge.
Dell has used a couple of software conferences as bully pulpits for some of its own cloud announcements. The company is making a major cloud move, one it obviously hopes will elevate it to the status of a “real” computer company (it ranks third in our surveys in terms of what users think of as a “real” player in the space, after IBM and HP, but it’s a distant third). In its effort, Dell will partner with both VMware (one conference pulpit) and Salesforce (the other) to offer Dell-branded cloud technology, but it also intends to host open-source cloud offerings (a la Hadoop, perhaps) and even Microsoft Azure.
Dell’s greatest strength has been in the SMB space, and that is also perhaps the best target for cloud services in the near term. Enterprises secure good economies of capital and support scale in their normal data center build-outs, and it’s hard for public cloud services to compete. For the SMB, neither capital nor support economies are easily established, and the latter in particular is problematic because SMBs often can’t attract skilled IT technicians. Remember that Dell also has a professional services arm now, which means its own support skills likely have a lower marginal cost, all of which could make its pricing more attractive.
VMware, meanwhile, is advancing its own cloud position with a Data Director designed to create an enterprise DBaaS model that would also in my view facilitate cloud models where the application or its components ran in the cloud and the data stayed in the enterprise’s own repositories. This would help considerably in building a larger cloud TAM because it dodges the thorny problem of cloud data pricing and security.
In another initiative, VMware has joined with Arista, Broadcom, Cisco, and Emulex to create what they call the “Virtual Extensible LAN” or VXLAN. The strategy is to tag each Ethernet frame with a 24-bit segment identifier and then encapsulate the whole thing in UDP/IP. It would allow the creation of far more virtual segments, with more members, and do so using scalable IP rather than flat Ethernet. VMware will be adding VXLAN support to its hypervisor, and the result would be a more scalable data center and cloud LAN architecture. The partners obviously hope this will become a new model for addressing distributed cloud resources.
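The heart of the encapsulation is the 24-bit segment identifier (the VXLAN Network Identifier, or VNI) carried in an 8-byte header prepended to the tunneled frame. A minimal sketch of that header layout, following the published VXLAN specification (the helper function name is mine, for illustration):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI.

    Layout per the VXLAN spec: flags (1 byte), reserved (3 bytes),
    VNI (3 bytes), reserved (1 byte).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Pack flags, skip 3 reserved bytes, then VNI shifted into the
    # upper 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8
```

The 24-bit VNI is the point of the exercise: it allows roughly 16 million distinct segments, versus the 4094 usable IDs of an 802.1Q VLAN tag.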
The initiative is more important for its goal than its methodology. We’re seeing network technology adapting to the cloud. That shouldn’t be surprising, nor should it be happening only now. The network creates the cloud; it’s the binding force that makes not only the resource pool possible but also makes its access possible. The network is the business case, the network is the business. But the network has been silent on the topic of the cloud. Maybe this is a sign that the silence is finally over.
Verizon has taken what may be a very important and evocative step toward maturing its enterprise cloud strategy with the purchase of privately held CloudSwitch. The significance of the move is hard to appreciate without an understanding of just what the heck CloudSwitch is, so I propose to start with that.
The classic vision of cloud computing is virtual something-hosting, where the “something” is anywhere from an entire application to the bare-bones machine image (SaaS down to IaaS, respectively). This model is useful as a way of looking at the cloud in isolation, but for most enterprises, the cloud in isolation isn’t very interesting. Since they don’t expect to migrate any more than a quarter of their IT spending to a public cloud, the key question for them is how you hybridize.
Microsoft is the only one of the currently popular cloud leaders that has taken hybridization to heart from the first. The Azure cloud is a PaaS cloud that can be extended into the enterprise with the Azure Platform Appliance, a partner-delivered combination of Microsoft software and server and network hardware. With APA, a user can build an “Azure cloud” that seamlessly extends between an enterprise data center and a public cloud provider (Microsoft, of course, and in theory other cloud providers who adopt the Azure architecture).
CloudSwitch can be visualized as a more generalized model of the same hybridization notion. With this approach, the user deploys a series of CloudSwitch Instances in the cloud and a CloudSwitch Appliance (which is a software component, not a gadget) in the data center. The appliance links to all of the instances in as many clouds as there are, and it essentially synchronizes each instance as a host for one or more virtual machines that are managed to be functionally identical with the applications’ resources in the enterprise. What you end up with is a kind of “envelope” that everything runs in and that can be made to extend to any number of clouds that can host a virtual machine. A secure Internet tunnel links the components of this architecture.
Apple’s Steve Jobs has finally decided that his health won’t permit him to head Apple and has passed control to Tim Cook, the Apple COO who has been the administrative head since Jobs took a leave early this year. I met Steve twice in my career, once very early in Apple’s rise and again after he’d brought the company back from the brink. There was no mistaking his innovative flair, then or now. While I’m sure that Apple management can run the company, I’m far less certain that it can run the market. Steve could, and did.
The move comes at a critical time for Apple. While the company has almost been the single-handed driver of the mobile revolution, the product cycles in that space are getting shorter, and it’s harder to say what the next generation of devices might be. A smartphone is a logical extension of a standard phone and one that exploits the broadband mobile connectivity that was already in place. A tablet is in many ways an extension of a smartphone. What extends the tablet? What is the Next Big Thing? The answer is the cloud, the mobile/behavioral ecosystem that will create the electronic virtual world we’ll all live in, in parallel with the real world. For Apple, it’s the iCloud, a course Steve Jobs has already charted.
Google knows that, of course, and sees a similar vision. One could argue that Google sees it even more clearly than Apple, in fact, because Apple’s culture has always been just a tad elitist and thus egocentric. Android and the MMI deal are Google’s appliance play, and for now, ChromeOS is carrying the flag of the cloud, in the form of hosting the thinnest of all possible clients. ChromeOS, in my view, is just a placeholder for an eventual shift toward a more Android-centric future, but one that focuses on exploiting Android as a cloud conduit, just as Apple wants iOS to be.
The thing is, the secret sauce of the future is the mobile/behavioral stuff, and neither Apple nor Google has any particular incumbency there. Nobody does, in fact. My work with operators suggests that they understand there’s a lot to be done and a lot of money to be made in the mobile/behavioral symbiosis. The problem they have is that this particular area of service innovation is even more vague than content monetization, and they can’t get anyone on the vendor side to talk effectively about content. What hope do they have for mobile? If you’re a vendor and you want to own the market of the future, this is the problem you need to solve for your customers.
Interestingly, Alcatel-Lucent has just issued a press release calling for more thoughtful use of mobile assets in customer care, and when you read into the details, you see some of the elements of a mobile/behavioral solution at a more general level. The Alcatel-Lucent mantra is “contact me, connect me, know me,” and that is pretty much what I believe to be the key to mobile/behavioral opportunity. You have to be able to reach the customer proactively with social/behavioral changes to their virtual world, to connect them to the other partners (human or cloud-machine) in that world, and you have to know a lot about their interests, desires and prohibitions to make inferences about what’s best for them at that moment in time. I’d like to see Alcatel-Lucent take this story more into the general consumer market. I’d also like to see some competitors push the story even further.
It looks like there may be some hope for Clearwire; Sprint is said to be seeking cable partners to help fund a buyout of the firm. There’s some logic to this move, I think, because with mobile becoming the hottest spot in all of networking, the cable MSOs are generally without a mobile property. Cable operators need a mobile strategy that will let them into the game without each of them building out private mobile infrastructure.
The question is whether ganging up on a single solution for mobile is an answer. The cable companies are competitors in the major metro areas, and thus it’s hard to see them playing nice in the mobile space. The biggest problem would be the integration of TV-Everywhere content with mobile delivery when potentially several competitors in a given market have different rights under different terms.
The rumor is that Sprint would establish itself and Clearwire as a host for MSO MVNO relationships and that they would all deploy their own rights management layer on top. But what about the CDN space? CDNs for mobile service would be expected to require deep caching that’s tightly coupled to the backhaul network to optimize utilization and Quality of Experience.
Does that mean one shared CDN or would every MSO have to deploy its own? It’s hard to see how the latter would work, but also hard to see how they’d keep fairness in the first option, or even decide what “fair” meant. Weighted access based on cable customers, mobile customers, or what?
Well, revolutions are interesting at least, and we certainly have one now. HP is exiting the PC business, spinning off its PC unit and doing some M&A to boost itself as a player in the software and systems space. In fact, if you look at what seems to be the Plan of the Day, it seems as if HP wants to be Oracle: software-intensive and enterprise-focused. In its spring quarter, HP was hurt by the soft consumer PC market, where Dell (which had less consumer exposure) did better. Now with Dell taking an outlook hit and HP following suit (again), there was little the company could do except admit that PCs were not now, and never again would be, what they used to be.
Tablets and smartphones aren’t in HP’s future either. HP is dumping the whole WebOS effort and all of the devices that came with it. The move is in some ways more dramatic than the decision to spin out PCs because it’s a retreat from the client business completely, a sharp turn toward the center of the action that would seem to be irreversible. Are they abandoning the client world to Asia, to Apple, or both?
To round out the move, HP will buy a British software specialist, Autonomy, which has strong credentials in database searching and business, as well as some expertise in content management. The price for the software company seems high to Wall Street; that’s probably one reason why HP’s shares have been off in pre-market trading. It’s the price and not the concept. HP has been buying software companies for some time as part of a transformation that started with the hiring of former SAP CEO Apotheker. Most IT players at this point realize that software is the key to establishing a direct connection to the users’ business case, and HP is proving its commitment to that approach. But remember that HP is still a broad-based data center player, and a giant in enterprise computing.
Apple’s iCloud is advancing quickly to production status, and with the progress comes more clarity into what the service will offer. Three things about iCloud caught my eye: Windows integration, the pricing for data storage, and the potential competition with Microsoft’s Live strategy.
I’ve noted in the past that one of the biggest issues in cloud computing adoption, and one that is virtually never mentioned, is the cost of storage. Standard storage pricing from market leaders in the space would put the cost of a terabyte of storage at more than $1,000 a year, which is more than 10 times the cost of buying a terabyte drive and 20 times the marginal cost per terabyte for many data center disk arrays. With typical installed lives of three years, internal storage is then closing in on being ONE PERCENT of the cloud cost. Apple’s iCloud pricing sets an even higher price. At $100 for 50GB, a terabyte would cost $2,000 a year.
It doesn’t take rocket science to see that we’re pricing cloud storage an order of magnitude or more beyond the equivalent cost in the data center, and many cloud services also charge for outbound delivery. Those rates could double effective storage cost just by churning that terabyte once per month. Thus current cloud pricing policies would discourage the deployment of enterprise mission-critical apps by pushing storage costs way above any possible point of justification. We’re creating cloud computing for the masses, but not for masses of data.
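Using the figures above (roughly $1,000 per terabyte-year for standard cloud storage, a $100 consumer terabyte drive amortized over a three-year installed life, and iCloud’s $100-per-50GB rate), plus an assumed outbound-delivery charge of $0.10/GB (my illustrative number, not a quoted rate), the arithmetic works out as follows:

```python
# Inputs from the discussion above; the egress rate is an assumption.
cloud_per_tb_year = 1000.0   # standard cloud storage, $/TB/year
drive_cost = 100.0           # one-time cost of a 1 TB drive, $
install_life_years = 3       # typical installed life
egress_per_gb = 0.10         # assumed outbound-delivery rate, $/GB

# Amortized cost of owning the drive.
drive_per_year = drive_cost / install_life_years

# iCloud-equivalent rate: $100 per 50 GB scaled to a terabyte.
icloud_per_tb_year = (1000 / 50) * 100

# Churning the full terabyte out of the cloud once a month.
churn_per_year = 1000 * egress_per_gb * 12

print(f"Owned drive:       ${drive_per_year:,.0f}/TB/yr")
print(f"Cloud storage:     ${cloud_per_tb_year:,.0f}/TB/yr")
print(f"iCloud-rate cloud: ${icloud_per_tb_year:,.0f}/TB/yr")
print(f"Monthly 1TB churn: ${churn_per_year:,.0f}/yr extra")
```

On those assumptions the owned drive runs about $33 per terabyte-year against $1,000 to $2,000 in the cloud, and the churn charge alone roughly matches the base storage fee, which is the doubling effect described above.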