When your small, family-owned business has been around for more than 100 years AND it has been successful up to this point, it’s kind of a big deal for you to change the way you do things. Don’t you think?
But Carlo’s Bakery, a Hoboken bakery that has received national attention thanks to some well-timed media coverage and the reality TV show fame of master baker Bartolo Valastro, is ready to change the recipe for how it runs its back-office operations, with the advice of Exigent Technologies, a New Jersey managed service provider and IT services company. Paper and pen weren’t cutting it anymore.
Turns out that Exigent Technologies has been tapped to install a completely new infrastructure at Carlo’s Bakery’s 30,000-square-foot location in Jersey City and at its original Hoboken location. The “technology makeover” will include a virtualized network infrastructure, storage area network, printers, security software, servers and client systems (both desktops and notebook computers). Yep, the works. Carlo’s Bakery is investing in Exigent Technologies’ primary managed services offerings to pull this off: the ASSURANCE fixed-fee IT service plan and the PREVENT managed backup and disaster recovery service for small businesses.
Here’s what Leo Minervini, vice president of technology for Carlo’s Bakery, had to say about the plan:
“Investing in IT and selecting a local IT service provider wasn’t a decision we took lightly. Exigent Technologies earned our business every step of the way. From the very first call, the team at Exigent Technologies was clearly a cut above the rest and possessed the genuine enthusiasm, values and proven technology expertise we were looking for in an IT services provider.”
I invite you to reread that last sentence and to reflect on which factors get listed first among those that inspired the decision: “enthusiasm” and “values.” Not “proven technology expertise.”
As your IT services organization gears up for the end-of-the-year push to finish the year with a bang, that quote is a great reminder of what differentiates the great MSPs, VARs, resellers, IT solution providers — whatever you want to call yourself — from the rest. It isn’t the skills. Those are a given. It is the attitude that your team displays to your customers.
Think about it.
I am only somewhat embarrassed to admit that I have a desk drawer full of mobile gadgets that I probably don’t use as much as I should, including BOTH an Amazon Kindle (an older one) and a first generation Apple iPad. It is rare that I carry both of those devices on a trip, but I still do carry one or the other, depending on whether or not I have to do any writing work while I am on the road.
I bring this up because I just read some new data released by research firm ChangeWave Research (a division of The 451 Group) that suggests the new Amazon Kindle Fire is already much more of a threat to the iPad in just a month of existence than the Samsung Galaxy Tab, which has been on the market for a year.
The report is based on surveys of 3,043 consumers during early November. It shows the Amazon Kindle Fire represents a serious competitive threat to the iPad, at least in North America, where the poll was conducted. Approximately two-thirds of those surveyed expressed an interest in buying the iPad, while about 22 percent said they were interested in a Kindle Fire. No other tablet garnered more than 1 percent of the responses, according to the report.
The data is just another demonstration that Amazon is far, far more than a really efficient online retailer.
Whether it was by accident or design, the company is now at the center of the hottest technology segment since the original personal computers prompted businesses to rethink the way their employees did work. With its vast knowledge of consumer behavior, Amazon represents a far more credible threat to Apple than many of the technology vendors that got their start on the business-to-business side of the world.
Yes, you’re right. No IT service provider will get rich selling either tablets or e-readers, but there are rich managed services opportunities in the field of mobile device management. The Apple iPad, as an example, is a serious factor in healthcare IT environments as doctors and other clinical professionals seek ways to increase patient satisfaction. I expect the Kindle Fire will soon become a factor as well, especially when you consider all the textbooks and medical journals that the healthcare industry consults.
Amazon is very relevant for another reason, of course: It is a serious contender in the infrastructure as a service (IaaS) portion of the cloud computing marketplace.
There are few companies in the channel that could hope to compete with Amazon’s ability to scale. On the flip side, Amazon will find it tough to contend with the channel’s ability to offer cloud infrastructure customized for specific verticals, along with the personalized service and support that many SMBs need. Of course, Amazon might also be a very relevant infrastructure partner for some aspects of the IT solution provider channel.
The emergence of the Amazon Kindle Fire is another reminder that Amazon is far more than just another e-commerce company. This is a technology company to be reckoned with in both mobility and cloud infrastructure, and IT solution providers are advised to keep close tabs on its plans.
I was chatting up a solution provider last Friday about one of the stories I’m writing for this month, and we got to talking about the still-widening ripple effects from the hard drive assembly and component facilities flooded last month in Thailand.
As reported on SearchITChannel, the devastated area is responsible for a large portion of the industry’s hard-disk drive production, and companies like Western Digital and Seagate are having supply chain problems as a result. Now, market research firm International Data Corp. is reducing its outlook for both hard drives and personal computer shipments as a result of the natural disaster.
IDC said that during the first half of 2011, Thailand accounted for 40 percent to 45 percent of the worldwide production of hard disk drives. Almost half of that capacity was taken offline because of the flooding. (What hasn’t been flooded has been compromised by lack of access and electricity outages.) The shortages will continue at least into the first quarter of 2012, according to IDC. Here’s what else the research firm predicts:
- The impact on fourth-quarter PC shipments will be about 10 percent, because most of those units have already been produced or are in production.
- In a “worst-case” scenario, PC shipments for the first quarter of 2012 could be off by 20 percent.
- Hard-disk drive prices will rise, as demand outstrips supply. Note to self: Check into whether this dynamic motivates more production of configurations that include flash drives, unless (of course) they are produced in the same facilities.
- There could be some market share shifts as a result, so IT solution providers might wind up reconsidering their vendor suppliers on both a short-term and long-term basis.
- Pricing should be stabilized by June, but it could take until the second half of the year to ramp back up to typical production volumes.
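A quick back-of-the-envelope check on those figures suggests that around a fifth of worldwide hard-drive capacity went dark. This is just a restatement of IDC’s numbers, reading “almost half” as roughly 50 percent:

```python
# Back-of-the-envelope check on IDC's Thailand flooding figures.
# Assumptions: Thailand's share of worldwide HDD production was
# 40% to 45% (per IDC), and "almost half" of that capacity went
# offline, which we approximate here as 50%.
thailand_share_low, thailand_share_high = 0.40, 0.45
offline_fraction = 0.50  # "almost half" of Thai capacity

global_offline_low = thailand_share_low * offline_fraction
global_offline_high = thailand_share_high * offline_fraction

print(f"Roughly {global_offline_low:.0%} to {global_offline_high:.1%} "
      "of worldwide HDD capacity offline")
```

In other words, the flooding took out on the order of 20 to 22.5 percent of the world’s hard-drive production capacity, which squares with IDC’s warning about shortages lasting into 2012.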
Said John Rydning, IDC research vice president for hard-disk drives and semiconductors, in a statement:
“In response to the crisis, priority will be given to the large PC manufacturers that drive [hard-disk drive] shipment volumes as well as to the high-margin products used in enterprise servers and storage. But the [hard-disk drive] vendors can’t neglect their smaller customers, whose business will continue to be important once capacity is fully restored. Some interesting production and partnering arrangements with customers can be expected as [hard-disk drive] vendors scramble to bring production back up while simultaneously angling for strategic advantage.”
Market research firm Gartner is predicting that spending for security services will mushroom not just this year, but between now and 2015.
Of particular interest to managed service providers should be the fact that the managed portion of the security services pie is slated to almost double during that timeframe — from $8 billion to $14.9 billion by 2015. That $14.9 billion is part of an overall projected spending pie of $49.1 billion across the entire security services market by 2015, according to the Gartner report (“Forecast: Security Service Market, Worldwide, 2011”).
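For context, Gartner’s numbers imply a brisk growth rate for the managed slice of the market. The calculation below simply restates the report’s figures, treating the $8 billion number as a 2011 baseline (the report implies, but does not state, that baseline year):

```python
# Implied compound annual growth rate (CAGR) for managed security
# services, per Gartner: $8B growing to $14.9B by 2015.
# Assumption: the $8B figure is the 2011 baseline.
start, end = 8.0, 14.9   # billions of dollars
years = 2015 - 2011      # four-year span

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17% per year
```

That kind of compounding growth is exactly why the managed portion of the pie should interest MSPs more than the overall market number does.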
Gartner research director Lawrence Pingree said:
“[The uptick in managed security services] is largely driven by organizations looking at managed security services (MSS) providers as a way to maximize resources and lower ongoing operating expenditures on security. Demand in the small and midsize business segments is also high as businesses continue looking to external parties to provide them with additional security expertise and resources that they may be lacking organizationally to help them make the right security decisions or provide security functions externally.”
North America was listed as the biggest market for security services spending. Revenue is expected to top $14.6 billion in 2012, growing to $19 billion by 2015. Those figures are for the overall security services market, which includes consulting, development and integration, management, software support, and hardware maintenance and support.
On the surface, the Open Compute Project — first announced by Facebook several months ago — is focused on sharing best practices and data center architecture approaches that can help data centers become more energy-efficient and “greener” overall.
But the theme of “open hardware” that dominated the latest summit held by the group in New York suggests that there is actually a much bigger movement afoot, one that I think could provide new momentum for system builders that integrate their own servers based on Intel technology.
Andy Bechtolsheim, chief development officer and chairman of Arista Networks (and, of course, one of the Sun Microsystems co-founders), said that the information technology industry has a long history of standards development that has helped drive adoption and drive down costs. “What has been missing is a standard at the system level,” he told attendees of the second Open Compute Summit.
Bechtolsheim went on to criticize the “gratuitous differentiation” that distinguishes data center infrastructure technologies from each other and makes it tough for VARs and systems integrators — and businesses for that matter — to ensure interoperability. “This benefits the vendor more than the customer,” he said.
It is also a big reason that Facebook chose to build its own servers when constructing its data centers, said Frank Frankovsky, Facebook’s director of technical operations, who founded the Open Compute Project and now sits on its board. Frankovsky’s fellow directors are Bechtolsheim; Don Duet, managing director with Goldman Sachs; Mark Roenigk, chief operating officer of Rackspace Hosting; and Jason Waxman, general manager of high-density computing for the Intel data center group.
By thinking about the rack holistically (in effect, the rack is the new chassis), Frankovsky said Facebook was able to reduce the energy consumption of Facebook’s Prineville, Oregon, data center by 38 percent compared with existing data centers tasked with doing the same amount of work. The cost to build out that facility was 24 percent less, because Facebook exercised total control. Among other things, it opted for a 480-volt power distribution system to help reduce power losses during the conversion process and it reuses the hot aisle air to heat offices in the winter time.
Here’s the interesting part: Facebook plans to make its approaches available to the Open Compute Project community. This community will operate according to the model embraced by the Apache Software Foundation, adopting the contributions it deems appropriate. Among the early contributions are motherboards from ASUS. In addition, Red Hat has said it will support Red Hat Enterprise Linux on certified systems.
How far will the Open Compute Project reach? Frankovsky said that in order for “scale computing” — the infrastructure necessary to support the cloud computing movement — to succeed, the pace of hardware innovation needs to increase.
Open Compute encourages the best brains in the community, including members of the white-box server channel, to share their ideas. Other technology companies that have jumped on the bandwagon include Baidu, Cloudera, Dell, DRT, Future Facilities, Huawei, Hyve (Synnex), Mellanox, Nebula and Silicon Mechanics. Netflix, another company that relies on massive data centers, has also joined the community.
Nasuni, an infrastructure storage company that relies 100% on channel sales, has added multi-site access to its Data Continuity Services offering.
The new capability takes file-level snapshots of a customer’s data and stores them in the cloud, with controllers at different offices. It then allows users to access and work with the same data from multiple locations.
Bill Trautman, director of storage technology at DataSpan, explained that the appeal of multi-site capabilities is that customers no longer have to worry about syncing and moving their data between physical sites, and that the data is always up to date. Customers also retain granular control: because they hold the encryption keys, they can grant access to data to whomever they please.
Nasuni partners don’t gain much margin on the product itself — their real business comes from services such as storage upgrades and renewals while building a loyal customer base. They will be able to sell the service on a terabyte-per-year basis and, according to Andres Rodriguez, Nasuni CEO, a midrange deal for partners would be $21,000 for three terabytes. Nasuni, which has 40 to 50 partners in North America, targets infrastructure partners that are able to sell and deploy storage and virtualization.
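Rodriguez’s midrange example works out to a simple per-terabyte rate, which is handy when sizing a quote. The arithmetic below just restates his figures:

```python
# What Nasuni's quoted "midrange deal" implies per terabyte per year.
deal_price = 21_000   # dollars, per CEO Andres Rodriguez
deal_capacity = 3     # terabytes

price_per_tb_year = deal_price / deal_capacity
print(f"${price_per_tb_year:,.0f} per TB per year")  # prints $7,000 per TB per year
```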
Trautman said that this is the right place and right time for unstructured data.
“Multi-site access will be huge in this market because customers will find a myriad of ways to use it. A number of them are looking for an unstructured data offering in the cloud,” Trautman said. “For us, it’s more service than offering, and it’s great because customers have the ability to use the amount of data they want when they need it.”
Nasuni mainly uses Amazon’s Simple Storage Service (S3) to store customer data to assure high availability and, though it hasn’t happened yet, is able to detect outages and issue 10 days of credit to customers in those instances. That 100% uptime guarantee in its service-level agreements (SLAs), along with help in dealing with cloud service providers, are big pieces of the service.
“Nasuni handles customer questions such as ‘who’s going to be my back-end cloud provider’ and ‘what’s this going to cost me on a monthly basis’ and deals with the cloud provider-customer agreement for them,” Trautman said. “And 10 days of credit to customers who experience a failure is a statement to the market that they’re serious.”
Citrix and VMware are both focusing on updates and technology developments that help make their virtualization platforms easier to configure, deploy and manage.
During its VMworld Europe conference last week, VMware introduced three new virtualization management offerings: a vCenter Operations update and new vFabric Application Management and IT Business Management suites. Here’s what is new:
- New licensing options, including one squarely focused on SMBs and small vSphere deployments that includes just vCenter Operations Manager for a price of $50 per virtual machine (VM)
- Application discovery and mapping, which shows which applications are running on which hosts; this is seen as an advance for backup and security policies
vFabric Application Management Suite
- Includes vFabric Application Director and vFabric Application Performance Manager; the latter offers insights about the performance of virtualized applications
- In the future, this suite will be integrated more tightly into AppInsight (a new product); VMware is offering promotions for users of its Hyperic technology, so there is a migration opportunity for IT solution providers
IT Business Management Suite
- This is the repackaged version of the Digital Fuel technology acquired by VMware earlier this year
- Offered as a service, the application allows non-IT business managers to look at the labor and technical costs associated with specific applications
Probably the biggest drawback of these new releases, due in late 2011 and early 2012, is that they don’t support any hypervisor other than VMware’s own technology.
Several new technologies being announced this week by Citrix also are intended to ease management, although the focus is on the desktop rather than the server.
At the center of the releases is an update to VDI-in-a-Box, which is a set of technologies for setting up virtual desktops. The release supports all three major hypervisors: Citrix XenServer, Microsoft Hyper-V and VMware’s vSphere (ESX and ESXi). It has also been integrated with Citrix GoToManage, a managed services platform that can be used to monitor and tune VDI-in-a-Box remotely.
Citrix has created a new partner designation in its Citrix Solution Advisor Program, called SMB Specialist, in order to support IT solution providers and managed services providers selling into this space. The company will begin certifying partners at this level in January.
Yet another open source project is going commercial — Nginx, the little Web server that could, seems to be picking up steam in the enterprise IT community.
Nginx (pronounced engine-x) Inc. has been moving forward with improvements that managed services providers (MSPs) and VARs delivering Web services will welcome. The open source Web server is already the power behind popular high-traffic sites such as Facebook and Hulu, among 40,000 other domains. Although it still has a relatively small share of the Web server market, it is growing in popularity even as Apache and Microsoft lose market share.
When Nginx founder Igor Sysoev found his project becoming popular, maybe too popular, and started receiving feature requests from commercial users, he realized that it might be time to take his little project and go commercial. Sysoev created Nginx in 2004 with the aim of solving problems he had with the technology offerings of the day. Now the startup has secured $3 million in Series A round funding and plans to offer its first commercial product in Q3 2012.
According to Andrew Alexeev, head of business development and marketing at Nginx, the focus of the commercial products is based on customer feedback and will include high availability, clustering, integration, and performance management improvements and tools. In addition, the company is looking to the cloud for business opportunities.
“We are also targeting cloud infrastructure density and efficiency,” said Alexeev. “Nginx can conserve hardware performance and improve security.”
The first commercial product will be a connection processing and optimization software platform that enables advanced performance, traffic management, extended configuration and security features for hosting, cloud and enterprise server infrastructure. The company will also offer an easy way for partners to migrate existing Web installations, such as those on Apache, to Nginx.
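The article doesn’t describe what that migration tooling will look like, but a common first step in a hand-rolled Apache-to-Nginx transition is to put Nginx in front of the existing Apache install as a reverse proxy. A hypothetical minimal sketch (the server name, paths and ports are placeholders, not anything from Nginx Inc.):

```nginx
# Hypothetical sketch of a first step in an Apache-to-Nginx migration:
# Nginx serves static files directly and proxies everything else to
# the existing Apache instance, now rebound to port 8080.
server {
    listen 80;
    server_name example.com;

    # Serve static assets straight from disk with Nginx.
    location /static/ {
        root /var/www/example;
    }

    # Pass dynamic requests through to Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Once the proxy layer is stable, static and then dynamic workloads can be moved over to Nginx incrementally rather than in one risky cutover.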
Dell partners should take note of this news as one of Nginx’s investors is a firm affiliated with MSD Capital, which is Dell chairman and CEO Michael Dell’s private investment firm. Alexeev shared that Nginx hopes to use this relationship to collaborate on the delivery of hardware services and management for customers.
The company will open its new San Francisco headquarters in Q4 2011.
Full disclosure: I write about green technology issues on a daily basis, so my decision to write for SearchITChannel about how IT solution providers are becoming involved with e-waste services was a very self-motivated and self-interested one.
But just in case you need more validation of the fact that businesses do, in fact, care about technology energy efficiency, materials make-up and so on, consider that massive technology distributor Ingram Micro has just signed a deal with EPEAT, the system that the federal government and an increasing number of companies are using to gauge the green credentials of the hardware they are interested in buying.
In case you don’t know EPEAT, the name is actually short for the Electronics Product Environmental Assessment Tool. The system covers a number of IT categories, including displays, integrated systems, notebooks, desktops, thin clients and workstations; in the future, it will cover things including printing and imaging devices, servers and mobile phones. There are more than 3,200 products covered in the database (from 48 different manufacturers).
EPEAT designates the green-ness of a given product by looking at things such as energy efficiency, the materials used within the equipment, and the services that are offered around the product in terms of end-of-life management (including reuse or recycling). The system is used to help determine which products in a given category have a better story to tell with respect to some of those metrics.
Ingram Micro has already been integrating EPEAT information into its solution provider catalogs. Under the extended relationship, starting in the fourth quarter, the distributor will be able to help solution providers become EPEAT Channel Partners. That means they will be officially qualified to “sell” the value of the EPEAT information. They will also be featured on the EPEAT Web site.
Whether or not your organization has a green agenda, some of the metrics covered under EPEAT such as energy efficiency and lifecycle management policies are more general issues of interest to a growing number of buyers. This alliance is a smart move on both the part of Ingram Micro, which can help provide a differentiator for some of its reseller customers, and for EPEAT, which can continue pushing its visibility out of government agencies and into the business world.
The BlackBerry mobile device may have gotten its start in the enterprise world, but Research in Motion is determined to help small and midsize businesses perceive its technology as indispensable.
The Canadian company signed a deal this week with distributor Tech Data to better support IT solution providers selling BlackBerry solutions into SMBs. Tech Data and its partner in this mobile venture, Brightstar, will facilitate the activation process, which can be a hang-up for solution providers seeking to include BlackBerry devices as part of a mobile solution but who haven’t previously been able to handle the transaction process very easily. The process is supported by TDMobility, a new service offered by ActivateIT (a joint venture of Tech Data and Brightstar).
Joe Quaglia, senior vice president of U.S. marketing for Tech Data, described the offering:
“RIM is a strategic vendor partner for our launch of TDMobility into the channel, and ActivateIT is key to making the complete solution possible. We formed a strategic alliance with Brightstar to enable just this kind of offering, and we’re excited to offer our reseller customers the opportunity to increase their footprint in the channel by making complete, end-to-end BlackBerry solutions more easily available.”
I have to admit, as cool as this announcement sounds, I find myself wondering: what took so long?
With due respect to Tech Data, RIM hasn’t seemed much interested in the value-added channel before, so I feel sort of cynical about its intentions. Especially given its recent travails. Still, TDMobility is definitely the sort of service that I hope the channel hears a lot more about as mobile device management becomes an increasingly complex proposition for SMBs.