VMware is buying enterprise social/collaboration player Socialcast, and the move says a lot about the future of collaboration, virtualization, and of course VMware itself. We may be on the cusp of some interesting changes.
“Collaboration” is the term we apply to communication between workers aimed at reaching some collective decision or combined work product. Most collaboration today is still done face to face because the dynamic of real presence is the most effective way to share information and reach collective conclusions. However, the combination of improved communications technology (at lower cost) and increased dispersal of team members has been driving interest in virtual forms of collaboration. These have been further promoted by the rapid growth of smartphones and tablets that offer a wherever-you-are kind of link-up for workers.
It may also change if the basic notion of collaboration changes. We interact with each other based on cultural habits, and when we try to make collaboration work remotely, it’s logical to try to extend the communications interactions via telecom rather than to examine whether some different form of interaction would be smarter and more effectively extended over a network connection.
We’re continuing to see more developments in the cloud space that go beyond the hype and address some of the important issues. One in particular — the “Database.com” offering from Salesforce.com — is also demonstrating some important facts about the cloud and cloud services.
Cloud databases have been an issue of increasing importance because they’re essential for the cloudsourcing of any team or company application and because they represent a new dimension in security risk for enterprises. Amazon’s EBS was the proximate cause of that company’s recent cloud outage, so cloud databases also demonstrate the new dimension in vulnerability that this sort of distributed technology can bring. Enterprises need a way of harmonizing cloud use with data security or they’re not going to the cloud with anything that’s important, and that would relegate the cloud to hosting websites or testing/piloting applications.
Database.com is first an attempt to integrate strong security into a cloud DBMS (an RDBMS to be specific). It includes strong authentication at the API-call level, meaning that every access attempt is verified, and row-level security rights within tables in the DBMS. All of this is good stuff, and for many enterprise applications, it will help relieve security fears. But it’s not enough by itself.
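The combination of per-call authentication and row-level rights can be illustrated with a minimal sketch. To be clear, this is a hypothetical illustration of the concept, not Salesforce’s actual API; the token scheme, table layout, and function names are all invented for the example.

```python
# Hypothetical sketch of per-call authentication plus row-level security,
# in the spirit of the Database.com description. Names and structures are
# illustrative only, not Salesforce's actual API.

TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}  # API token -> user identity

# Each row carries an access-control set naming the users allowed to read it.
ACCOUNTS = [
    {"id": 1, "name": "Acme",    "acl": {"alice"}},
    {"id": 2, "name": "Globex",  "acl": {"bob"}},
    {"id": 3, "name": "Initech", "acl": {"alice", "bob"}},
]

def query_accounts(token):
    """Authenticate the caller on every API call, then return only the
    rows the authenticated user is entitled to see."""
    user = TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")
    return [row for row in ACCOUNTS if user in row["acl"]]
```

Here a query with Alice’s token returns only rows 1 and 3; an unknown token is rejected before any data is touched. The point of the pattern is that authorization is enforced inside the data layer itself, not left to the application.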
No matter what any vendor says, mission-critical enterprise data isn’t likely to go into the cloud. The career risks for anyone making that decision are profound, according to the results of our spring survey. None of the enterprises we asked said they believed they would cloudsource a mission-critical DBMS. So it would appear we’re at an impasse, right?
Verizon has taken yet another “leadership” step in defining how operators see their revenue futures. The company has indicated that it will likely acquire small software companies to create SaaS offerings hosted on the Verizon cloud.
Everyone has been infatuated with the notion of cloud computing as anointing the small and destroying the strong—it’s been a kind of populist theme that’s evolved in parallel with the whole Internet revolution. The problem is that it’s not a practical vision of the market. The big money in cloud computing comes from two sources—PaaS-based offloading of SOA app components from enterprise data centers for backup and overflow work, and SaaS opportunities to SMBs and even some enterprises. The big money’s still out there, and Verizon wants it.
AT&T is moving in this area too, and one interesting development there is that the company is working on the issue of asset creation/exploitation and not just the issue of APIs or cloud architecture. AT&T is looking at how to take legacy assets in the OSS/BSS space and make them available as APIs for integration into higher-level services (by developers and, we’re told, by internal service architects). It also wants to integrate its smart appliances into its content services, not only as elements in a multi-screen strategy but as controlling tools to manage media and the experience. Finally, AT&T hopes to formulate a general-purpose HTML5-based architecture for its proto-smart-device GUI so that applications will run across the full range of stuff that’s rolling out.
Amazon is apparently getting into the tablet business, or so say several sources. My own view is that Amazon is going to compete with the Barnes & Noble Color Nook, a product that I’ve gotten myself and find enormously interesting, powerful and helpful. The issue here isn’t becoming a tablet player, it’s defending the ebook space against a competitor that’s using a tablet feature set to enhance e-reader value. Every Nook that’s sold is a B&N camel’s nose under the ebook opportunity tent. Amazon can’t sell Kindle books to that market, and of course B&N profits from the lock. So Amazon has to become a player in what I’ll call the “t-reader” space, a space that is almost a tablet but that lacks the ability to host competing e-reader software and so still locks the consumer in as a traditional e-reader would.
Amazon needs to make sure it doesn’t lose customers to the Color Nook, and that B&N doesn’t create a legion of book-hungry semi-tablet enthusiasts who can’t get Kindle without rooting their Nooks.
The question is whether Amazon can do anything at this point, when the B&N device is already out there and competing effectively, without giving away too much and hurting its profits even if it wins the t-reader race.
Some data from Nielsen suggests that tablet users are perhaps more focused on social media than on streaming video. The data shows that while e-readers outnumber tablets by an enormous margin, people are relatively unlikely to be e-reading while watching TV, but are quite likely (presuming they have a tablet) to be using a tablet. It doesn’t take rocket science to figure out that if reading a book is difficult while the TV is on, reading an e-reader is just as difficult. But watching a video on a tablet while watching TV is harder still. This means that most of these tablet-TV crossover users are really doing Facebook or Twitter, and the larger form factor makes social network access easier.
This doesn’t mean that video streaming to tablets is without adherents. Verizon is going to provide free hotspot services to offload traffic from its 3G/4G network, and that trend is accelerating worldwide. WiFi is a great strategy for pulling cellular traffic out of expensive 3G/4G facilities in locations where users are likely to settle for a while. I’ve been calling tablet users “migratory” rather than mobile users because most tablet use will come in sites where users can sit and focus—home, work or hospitality. In fact, some providers and some tablet players believe there’s a strong tablet opportunity in WiFi alone.
Truth be told, we don’t know what the consumer will do with tablets—exactly—because the consumer doesn’t know. That’s the big challenge of the mobile broadband revolution.
Netflix has been named the number one source of downstream traffic in all of North America, accounting for just under a third of all bandwidth consumed. Since Netflix is only one of many video sources, that means video accounts for the overwhelming majority of downstream traffic.
This only further highlights the problem that network operators face. Not only are they being asked to capitalize increased traffic that their current all-you-can-eat pricing model doesn’t monetize, they’re subsidizing the cannibalization of their own TV revenue opportunity. That’s particularly true for the cable MSOs whose primary revenue stream has always been TV. The markets are getting close to breaking here.
Intel has embarked on what might be the biggest battle of its corporate life—the battle to become relevant in the embedded system and appliance space.
While Intel has a license to produce ARM chips, it realizes that exercising it isn’t the answer to getting into the smartphone and tablet spaces. Not only would it suffer in terms of profits after the license fees, it would be perpetuating someone else’s processor architecture in the hottest space in the market. But wanting relevance doesn’t mean you’ll get it.
The big barrier for Intel to cross is getting big-name appliance OSs, which I’ve been calling “embedded control OSs” or ECOSs, ported to its architecture. One reason why Intel got so into the MeeGo Linux model was that it could easily support porting that OS to its architecture. Intel can do the same with Android, and in fact is doing just that, but it’s harder to get iOS moved over; Apple is in sole control there.
However, even getting the OS ported isn’t going to solve the problem, because there are hundreds of smartphone and tablet models out there already and more arriving every day. Given that Intel won’t be ready with even a minimal offering until 2012 and won’t be competitive in performance until likely 2013 or even 2014, things could get tough.
Google’s developer conference generated a flood of news, which was a bit of news itself. There was a time when big announcements were linked to industry events like trade shows, but the new trend to link them to developer meetings shows a new dynamic in the industry. Actually, it shows a re-validation of an old one. In 1982, the PC wasn’t an objective competitor to Apple’s line, but it was an open platform that encouraged developers and even hardware add-ons at a time when Apple was at best reluctant in that space. (CIMI was an integrator at the time, and we could not join Apple’s developer program.)
The second major thrust in the Google stream after Android was Chrome OS and the “realization” of its thin-client promise. I put that term in quotes because the release of Chromebooks did instantiate a Chrome OS promise, but it may not have fulfilled it. The Chromebook has the features that were expected, meaning that it’s a thin client that integrates tightly with Google Docs, it provides a client for desktop virtualization (Citrix demoed this) for access to Windows apps, and you can even buy it on a license program, almost like a set-top box, bundled with the Google business services. So what’s not to like?
Well, for one thing, the price.
Among cloud players, IBM and Microsoft get the highest marks on practical cloud strategies, and the second-highest go to the common-carrier cloud services, even though AT&T’s cloud offerings are still developing and Verizon’s (via Terremark) are only recently branded by the carrier. The reason is that enterprises see any outsourcing as a risk, IT outsourcing as a conspicuous risk, and outsourcing of any critical applications or data as a risk bordering on unacceptable. They demand both a trusted partner and a credible strategy for risk management.
Right now, the two big vendors we’ve named offer both, and the carriers are trusted in terms of financial stability, professionalism and quality of infrastructure. Where vendors have the edge is in the planning of a cloud-ready IT commitment. I think that the latter is more important than most people realize; simple IaaS cloudsourcing doesn’t address enterprise needs except in development and pilot testing. Anything other than IaaS requires significant SOA-like integration, something IBM and Microsoft realize and others either don’t realize or don’t address.
The assertion that the Sony PlayStation Network hack was hosted on Amazon’s EC2 isn’t raising all that many hackles among the cloud promoters, but it has demonstrated to enterprises yet again the concept of “collective risk.”
Cisco reported its results, which the Street has described as those of a “company in transition.” I disagree; they’re the results of a market in transition and a company not yet transitioning. The signs of the conditions that have dumped Cisco’s stock and fortunes have been clearly visible for more than four years and alarmingly visible for two. You can’t hope your way out of market change; you have to do something proactive.
Financial analyst comments on the Cisco situation had some common themes. It’s clear, they say, that profit margins are collapsing under competitive pressure. It’s clear, they say, that Cisco can’t be a “growth company” any more. It’s clear that major cost-cutting is in order. Some say it’s clear that to obtain shareholder value, Cisco needs to split up.
None of this clarity is clear to me, frankly.
“Competitive pressure” is an effect, not a cause. Price differentiation comes out of the absence of feature differentiation. For years now, router and switching vendors have pushed more and more arcane bit-pushing tricks as “differentiators” when none of the buyers believed or even understood the points. All this time, buyers have cried for substantive strategies to lead them through the transitions of networking—and didn’t get them. Cisco more than its competitors relied on sales pressure and incumbency. That won’t work when all the buyer wants is the lowest price for a hamburger. If you want the buyer to get off the hamburger kick, you need to have a strategy driver to push.