I was chatting with a solution provider last Friday about one of the stories I’m writing for this month, and we got to talking about the still-widening ripple effects from the hard drive assembly and component facilities flooded last month in Thailand.
As reported on SearchITChannel, the devastated area is responsible for a large portion of the industry’s hard-disk drive production, and companies like Western Digital and Seagate are having supply chain problems as a result. Now, market research firm International Data Corp. is reducing its outlook for both hard drive and personal computer shipments because of the natural disaster.
IDC said that during the first half of 2011, Thailand accounted for 40 percent to 45 percent of the worldwide production of hard disk drives. Almost half of that capacity was taken offline because of the flooding. (What hasn’t been flooded has been compromised by lack of access and electricity outages.) The shortages will continue at least into the first quarter of 2012, according to IDC. Here’s what else the research firm predicts:
- The impact on fourth-quarter PC shipments will be about 10 percent, because most of those units have already been produced or are in production.
- In a “worst-case” scenario, PC shipments for the first quarter of 2012 could be off by 20 percent.
- Hard-disk drive prices will rise, as demand outstrips supply. Note to self: Check into whether this dynamic motivates more production of configurations that include flash drives, unless (of course) they are produced in the same facilities.
- There could be some market share shifts as a result, so IT solution providers might wind up reconsidering their vendor suppliers on both a short-term and long-term basis.
- Pricing should be stabilized by June, but it could take until the second half of the year to ramp back up to typical production volumes.
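The scale of the disruption is easy to gauge from IDC’s own numbers. As a back-of-the-envelope check (the midpoint of IDC’s 40-to-45-percent range and the flat “half” figure are my own simplifications, not IDC’s):

```python
# Rough arithmetic on IDC's figures: Thailand produced 40% to 45% of the
# world's hard-disk drives, and almost half of that capacity went offline.
# The midpoint and the "half" fraction below are simplifications of IDC's
# stated ranges, used only to illustrate the scale of the disruption.
thailand_share = (0.40 + 0.45) / 2   # midpoint of IDC's range
offline_fraction = 0.5               # "almost half" of Thai capacity

global_offline = thailand_share * offline_fraction
print(f"Roughly {global_offline:.0%} of worldwide HDD capacity offline")
```

In other words, the flooding took out on the order of a fifth of the world’s hard-drive production capacity, which goes a long way toward explaining the price and share-shift predictions above.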
Said John Rydning, IDC research vice president for hard-disk drives and semiconductors, in a statement:
“In response to the crisis, priority will be given to the large PC manufacturers that drive [hard-disk drive] shipment volumes as well as to the high-margin products used in enterprise servers and storage. But the [hard-disk drive] vendors can’t neglect their smaller customers, whose business will continue to be important once capacity is fully restored. Some interesting production and partnering arrangements with customers can be expected as [hard-disk drive] vendors scramble to bring production back up while simultaneously angling for strategic advantage.”
Market research firm Gartner is predicting that spending for security services will mushroom not just this year, but between now and 2015.
Of particular interest to managed service providers should be the fact that the managed portion of the security services pie is slated to almost double during that timeframe — from $8 billion to $14.9 billion by 2015. That $14.9 billion is part of an overall projected spending pie of $49.1 billion across the entire security services market by 2015, according to the Gartner report (“Forecast: Security Service Market, Worldwide, 2011”).
Gartner research director Lawrence Pingree said:
“[The uptick in managed security services] is largely driven by organizations looking at managed security services (MSS) providers as a way to maximize resources and lower ongoing operating expenditures on security. Demand in the small and midsize business segments is also high as businesses continue looking to external parties to provide them with additional security expertise and resources that they may be lacking organizationally to help them make the right security decisions or provide security functions externally.”
North America was listed as the biggest market for security services spending. Revenue is expected to top $14.6 billion in 2012, growing to $19 billion by 2015. Those figures are for the overall security services market, which includes consulting, development and integration, management, software support, and hardware maintenance and support.
On the surface, the Open Compute Project — first announced by Facebook several months ago — is focused on sharing best practices and data center architecture approaches that can help data centers become more energy-efficient and “greener” overall.
But the theme of “open hardware” that dominated the latest summit held by the group in New York suggests that there is actually a much bigger movement afoot, one that I think could provide new momentum for system builders that integrate their own servers based on Intel technology.
Andy Bechtolsheim, chief development officer and chairman of Arista Networks (and, of course, one of the Sun Microsystems co-founders), said that the information technology industry has a long history of standards development that has helped drive adoption and drive down costs. “What has been missing is a standard at the system level,” he told attendees of the second Open Compute Summit.
Bechtolsheim went on to criticize the “gratuitous differentiation” that distinguishes data center infrastructure technologies from each other and makes it tough for VARs and systems integrators — and businesses for that matter — to ensure interoperability. “This benefits the vendor more than the customer,” he said.
It is also a big reason that Facebook chose to build its own servers when constructing its data centers, said Frank Frankovsky, director of technical operations at Facebook, who founded the Open Compute Project and now sits on its board. Frankovsky’s fellow directors are Bechtolsheim; Don Duet, managing director at Goldman Sachs; Mark Roenigk, chief operating officer of Rackspace Hosting; and Jason Waxman, general manager of high-density computing for the Intel data center group.
By thinking about the rack holistically (in effect, the rack is the new chassis), Frankovsky said Facebook was able to reduce the energy consumption of Facebook’s Prineville, Oregon, data center by 38 percent compared with existing data centers tasked with doing the same amount of work. The cost to build out that facility was 24 percent less, because Facebook exercised total control. Among other things, it opted for a 480-volt power distribution system to help reduce power losses during conversion, and it reuses hot-aisle air to heat offices in the wintertime.
Here’s the interesting part. As part of the Open Compute Project, Facebook plans to make its approaches available to the project’s community. This community will operate according to the model embraced by the Apache Software Foundation, adopting the contributions it deems appropriate. Among the early contributions are motherboards from ASUS. In addition, Red Hat has said it will support Red Hat Enterprise Linux on certified systems.
How far will the Open Compute Project reach? Frankovsky said that in order for “scale computing” — the infrastructure necessary to support the cloud computing movement — to succeed, the pace of hardware innovation needs to increase.
Open Compute encourages the best brains in the community, including the best members of the white-box server channel, to share their ideas. Other technology companies that have jumped on the bandwagon include Baidu, Cloudera, Dell, DRT, Future Facilities, Huawei, Hyve (Synnex), Mellanox, Nebula and Silicon Mechanics. Netflix, another company that relies on massive data centers, has also joined the community.
Nasuni, a storage infrastructure company that relies 100% on channel sales, has added multi-site access to its Data Continuity Services offering.
The new capability takes file-level snapshots of a customer’s data and puts them in the cloud, with controllers at different offices. It then allows users to access and work with the same data from multiple locations.
Bill Trautman, director of storage technology at DataSpan, explained that the key to customer interest in multi-site capabilities is that customers no longer have to worry about syncing and moving their data between physical sites, and their data is always up to date. There’s also great granularity in customer control: because customers hold the encryption keys, they can grant access to data to whomever they please.
Nasuni partners don’t gain much margin on the product itself — their real business comes from services such as storage upgrades and renewals while building a loyal customer base. They will be able to sell the service on a terabyte-per-year basis and, according to Andres Rodriguez, Nasuni CEO, a midrange deal for partners would be $21,000 for three terabytes. Nasuni, which has 40 to 50 partners in North America, targets infrastructure partners that are able to sell and deploy storage and virtualization.
Trautman said that this is the right place and right time for unstructured data.
“Multi-site access will be huge in this market because customers will find a myriad of ways to use it. A number of them are looking for an unstructured data offering in the cloud,” Trautman said. “For us, it’s more service than offering, and it’s great because customers have the ability to use the amount of data they want when they need it.”
Nasuni mainly uses Amazon’s Simple Storage Service (S3) to store customer data to assure high availability and, though it hasn’t happened yet, is able to detect outages and issue 10 days of credit to customers in those instances. That 100% uptime guarantee for service-level agreements (SLAs) and help with cloud service providers are big pieces of the service.
“Nasuni handles customer questions such as ‘who’s going to be my back-end cloud provider’ and ‘what’s this going to cost me on a monthly basis’ and deals with the cloud provider-customer agreement for them,” Trautman said. “And 10 days of credit to customers who experience a failure is a statement to the market that they’re serious.”
Citrix and VMware are both focusing on updates and technology developments that help make their virtualization platforms easier to configure, deploy and manage.
During its VMworld Europe conference last week, VMware introduced three new virtualization management offerings: a vCenter Operations update and new vFabric Application Management and IT Business Management suites. Here’s what is new:
vCenter Operations
- New licensing options, including one squarely focused on SMBs and small vSphere deployments that includes just the vCenter Operations Manager for a price of $50 per virtual machine (VM)
- Application discovery and mapping, which shows which applications are running on which hosts; this is seen as an advance for backup and security policies
vFabric Application Management Suite
- Includes vFabric Application Director and vFabric Application Performance Manager; the latter offers insights about the performance of virtualized applications
- In the future, this suite will be integrated more tightly into AppInsight (a new product); VMware is offering promotions for users of its Hyperic technology, so there is a migration opportunity for IT solution providers
IT Business Management Suite
- This is the repackaged version of the Digital Fuel technology acquired by VMware earlier this year
- Offered as a service, the application allows non-IT business managers to look at the labor and technical costs associated with specific applications
Probably the biggest drawback for these new releases, due in late 2011 and early 2012, is that they don’t support hypervisors other than VMware’s own technology.
Several new technologies being announced this week by Citrix also are intended to ease management, although the focus is on the desktop rather than the server.
At the center of the releases is an update to VDI-in-a-Box, a set of technologies for setting up virtual desktops. The release supports all three major hypervisors: Citrix XenServer, Microsoft Hyper-V, and VMware’s vSphere (ESX and ESXi). It has also been integrated with Citrix GoToManage, a managed services platform that can be used to monitor and tune VDI-in-a-Box remotely.
Citrix has created a new partner designation in its Citrix Solution Advisor Program, called SMB Specialist, in order to support IT solution providers and managed services providers selling into this space. The company will begin certifying partners at this level in January.
Yet another open source project is going commercial — Nginx, the little Web server that could, seems to be picking up steam in the enterprise IT community.
Nginx (pronounced engine-x) Inc. has been moving forward with improvements that managed services providers (MSPs) and VARs delivering Web services will welcome. The open source Web server already powers popular high-traffic sites such as Facebook and Hulu, along with 40,000 other domains. While it still has a relatively small share of the Web server market, it is growing in popularity while Apache and Microsoft are losing market share.
When Nginx founder Igor Sysoev found his project becoming popular, maybe too popular, and started receiving feature requests from commercial users, he realized it might be time to take his little project commercial. Sysoev created Nginx in 2004 with the aim of solving a problem he had with the technology offerings of the day. Now the startup has secured $3 million in Series A funding and plans to offer its first commercial product in Q3 2012.
According to Andrew Alexeev, head of business development and marketing at Nginx, the focus of the commercial products is based on customer feedback and will include high availability, clustering, integration and performance management improvements and tools. In addition, the company is looking at the cloud for business opportunities.
“We are also targeting cloud infrastructure density and efficiency,” said Alexeev. “Nginx can conserve hardware performance and improve security.”
The first commercial product will be a connection processing and optimization software platform that enables advanced performance, traffic management, extended configuration and security features for hosting, cloud and enterprise server infrastructure. The company will also offer an easy way for partners to migrate existing Web installations, such as those on Apache, to Nginx.
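In practice, moving a site from Apache to Nginx is largely a matter of translating configuration. As a rough, hypothetical sketch (the hostnames, paths and ports here are illustrative and not part of Nginx’s announced product), a simple Apache virtual host that serves static files and proxies dynamic requests to a backend might map to an Nginx server block like this:

```nginx
# Hypothetical example: an Nginx server block roughly equivalent to a
# basic Apache <VirtualHost>. All names, paths and ports are illustrative.
server {
    listen      80;
    server_name example.com www.example.com;

    root  /var/www/example;
    index index.html;

    # Serve static files directly; Nginx's event-driven model handles
    # many concurrent connections with low memory overhead.
    location / {
        try_files $uri $uri/ =404;
    }

    # Hand dynamic requests to an upstream application server
    # (e.g. Apache or an app process listening on port 8080).
    location /app/ {
        proxy_pass       http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One design difference worth noting for partners planning migrations: Nginx has no equivalent of Apache’s per-directory .htaccess files, so rewrite and access rules have to be translated into server-level configuration like the blocks above.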
Dell partners should take note of this news as one of Nginx’s investors is a firm affiliated with MSD Capital, which is Dell chairman and CEO Michael Dell’s private investment firm. Alexeev shared that Nginx hopes to use this relationship to collaborate on the delivery of hardware services and management for customers.
The company will open its new San Francisco headquarters in Q4 2011.
Full disclosure: I write about green technology issues on a daily basis, so my decision to write for SearchITChannel about how IT solution providers are becoming involved with e-waste services was a very self-motivated and self-interested one.
But just in case you need more validation of the fact that businesses do, in fact, care about technology energy efficiency, materials make-up and so on, consider that massive technology distributor Ingram Micro has just signed a deal with EPEAT, the system that the federal government and an increasing number of companies are using to gauge the green credentials of the hardware they are interested in buying.
In case you don’t know EPEAT, the name is actually short for the Electronics Product Environmental Assessment Tool. The system covers a number of IT categories, including displays, integrated systems, notebooks, desktops, thin clients and workstations; in the future, it will cover things including printing and imaging devices, servers and mobile phones. There are more than 3,200 products covered in the database (from 48 different manufacturers).
EPEAT designates the green-ness of a given product by looking at things such as energy efficiency, the materials used within the equipment, and the services that are offered around the product in terms of end-of-life management (including reuse or recycling). The system is used to help determine which products in a given category have a better story to tell with respect to some of those metrics.
Ingram Micro has already been integrating EPEAT information into its solution provider catalogs. Under the extended relationship, starting in the fourth quarter, the distributor will be able to help solution providers become EPEAT Channel Partners. That means they will be officially qualified to “sell” the value of the EPEAT information. They will also be featured on the EPEAT Web site.
Whether or not your organization has a green agenda, some of the metrics covered under EPEAT, such as energy efficiency and lifecycle management policies, are issues of interest to a growing number of buyers. This alliance is a smart move both for Ingram Micro, which can help provide a differentiator for some of its reseller customers, and for EPEAT, which can continue pushing its visibility out of government agencies and into the business world.
The BlackBerry mobile device may have gotten its start in the enterprise world, but Research in Motion is determined to help small and midsize businesses perceive its technology as indispensable.
The Canadian company signed a deal this week with distributor Tech Data to better support IT solution providers selling BlackBerry solutions into SMBs. Tech Data and its partner in this mobile venture, Brightstar, will facilitate the activation process, which has been a hang-up for solution providers that want to include BlackBerry devices as part of a mobile solution but haven’t previously been able to handle the transaction process easily. The process is supported by TDMobility, a new service offered by ActivateIT (a joint venture of Tech Data and Brightstar).
Joe Quaglia, senior vice president of U.S. marketing for Tech Data, described the offering:
“RIM is a strategic vendor partner for our launch of TDMobility into the channel, and ActivateIT is key to making the complete solution possible. We formed a strategic alliance with Brightstar to enable just this kind of offering, and we’re excited to offer our reseller customers the opportunity to increase their footprint in the channel by making complete, end-to-end BlackBerry solutions more easily available.”
I have to admit, as cool as this announcement sounds, I find myself wondering: what took so long?
With due respect to Tech Data, RIM hasn’t seemed much interested in the value-added channel before, so I feel sort of cynical about its intentions. Especially given its recent travails. Still, TDMobility is definitely the sort of service that I hope the channel hears a lot more about as mobile device management becomes an increasingly complex proposition for SMBs.
VARs interested in VM backup and hosted services may want to keep an eye on Veeam and its ProPartner Service Provider program.
Veeam specializes in virtualization backup and management; the program has been in existence for 18 months and now has 1,000 partners.
According to Dan Timko, director of hosted services for BlueWave Computing, an Atlanta-based VAR and IT services provider, a big draw for partners is the Veeam Backup & Replication product that’s included in the program. Timko said that the product, services and upsell capabilities it provides were BlueWave’s primary reason for joining the ProPartner Service Provider program.
The Backup & Replication product allows BlueWave, an Infrastructure as a Service company, to back up entire VMware deployments, take full snapshots and replicate VMs at its off-site facilities in Arizona.
“It helps reduce operating costs, and having better disaster recovery and backup has increased our revenue,” Timko said. “With [Veeam’s] licensing, there are no big, up-front purchases and we can sell per-customer VMs. If you don’t back up a customer’s VMs well, you won’t have that customer for very long.”
These VM backup and hosted services for customers have been a big growth area for BlueWave.
“Cloud services have been the fastest-growing part of our business and will be our biggest source of income over the next two-to-three years,” Timko said. “Customers are starting to ask more and more about hosting and the cloud.”
Mike Waguespack, director of emerging market development and global hosting for Veeam Software, said that the ease of joining the program and the flexibility of its licensing have helped it gain popularity.
“[VARs] can choose between CAPEX or OPEX models or have a combination of both where they use CAPEX for long-term customers and OPEX for the variable parts of their business,” Waguespack said.
As reported on SearchDataRecovery.com, Waguespack also said that Backup & Replication 6 will include Hyper-V support.
The latest version of the N-able Technologies managed service automation toolset adds remote monitoring for VMware, under the virtualization developer’s “VMware Ready” program.
N-central 8.1, which was released this week, can now be used to monitor the hardware components of ESX and ESXi servers, including power supplies, fans and RAID-related hardware. The platform also reports on the condition of disk subsystems. The idea is to ensure that the hardware underlying virtualized servers is in tip-top shape, lest an outage seriously impact performance.
The feature is being billed as an industry first among managed services automation platforms.
With another new N-central 8.1 enhancement, managed service providers (MSPs) can now control N-central via a mobile application for Android-based devices. (You can download the mobile app on the Android marketplace.)
The third big change in N-central 8.1 lies in the scheduling component. The platform includes a much broader range of scheduling options. What’s more, it can now be used to collect warranty information and create expiration alerts for equipment from Acer, Apple, Dell, Gateway, Hewlett-Packard, Lenovo and Toshiba.
The attention to automation should help MSPs focus staff on higher value technical services that could benefit from the attention of a human and not on routine concerns that can be automated. Said Karl Samborski, operations manager for Dynamic Strategies, an MSP in Cranbury, N.J.: “Using N-central, we’ve been able to lower our administrative costs, which means there’s more time to focus on customer service.”
MSPs that already use N-central and that have a current maintenance and support contract will receive the upgrade for free.