Box today hired the former head of EMC's Syncplicity business.
Jeetu Patel, former general manager of EMC's Syncplicity enterprise file synchronization and sharing (EFSS) business unit, has joined competitor Box as chief strategy officer and senior vice president of platform. He will report to CEO Aaron Levie. Patel will lead Box's platform organization while driving the strategy behind the platform business and developer relations.
The company’s corporate development organization, led by Villi Iltchev, senior vice president of corporate development, will report to Patel.
The news comes several weeks after EMC sold the Syncplicity sync-and-share business to Skyview Capital, a private investment firm based in Los Angeles that holds a portfolio of enterprise tech companies centered on mobile and networking.
EMC acquired Syncplicity in 2012, as the online file-sharing market started to become popular. However, EMC decided the business was a poor fit with its overall sales model and its storage and IT infrastructure focus. Patel did not make the move with Syncplicity to Skyview.
Jonathan Huberman, a former EMC executive, is Syncplicity's new CEO.
Box is a file-sharing and collaboration company and one of the start-ups that pioneered file sync-and-share technologies. The company claims its ecosystem has grown to include 50,000 developers and serves 4.5 billion third-party API calls per month. This past April, Box announced Box Developer Edition, which leverages the company's core enterprise-grade functionality.
The cloud computing company offers three account types: Enterprise, Business and Personal. Each account type has features such as unlimited storage, custom branding and administrative controls. Applications such as Google Apps, NetSuite and Salesforce can be integrated with Box.
Syncplicity, founded in 2008, competes in the same market along with Dropbox, Watchdox, Egnyte, Citrix Systems, Accellion, Ctera, Microsoft OneDrive for Business, Google Drive and other smaller vendors.
Patel held several leadership roles in EMC’s Information Intelligence Group, including chief marketing officer, chief strategy officer and chief technology officer. Prior to joining EMC, he was president of Doculabs, a research and advisory firm focused on collaboration and content management across a range of industries, including financial services, insurance, energy, manufacturing, and life sciences.
Today Nutanix said a leading virtual desktop infrastructure (VDI) application will work with its Acropolis software. No, it’s not VMware Horizon. The partner is Citrix, which said it will support Acropolis on XenDesktop and Citrix XenApp products along with NetScaler and ShareFile.
VDI is a key use case for Nutanix hyper-converged products. Most of its customers use VMware hypervisors, but Nutanix is positioning itself to provide an alternative to VMware. VMware started competing with Nutanix when it launched its Virtual SAN (VSAN) hyper-converged software in 2014, and Nutanix answered by developing its own hypervisor this year.
“This is a big milestone, because until now customers probably would not have deployed XenDesktop on our Acropolis hypervisor,” said Howard Ting, Nutanix senior vice president of marketing. “We need to build out our [Acropolis] ecosystem. VDI is important to us and naturally, you can assume we will not get VMware Horizon support. We felt Citrix support was critical.”
The Acropolis architecture lets customers move from one hypervisor to another, so a Nutanix customer running XenDesktop with VMware vSphere can move it to the Acropolis hypervisor.
Symantec’s spin-out of Veritas took a twist today when The Carlyle Group and other investors agreed to purchase Veritas for $8 billion. That’s $5.5 billion less than Symantec paid for Veritas in a 2005 blockbuster acquisition.
Symantec had been preparing to spin off its Veritas backup and storage management division, with a target of January 2016 for Veritas to become a separate public company. The Carlyle sale is expected to close around the end of 2015, so the timeline should remain about the same. Veritas will be privately held under Carlyle's ownership.
The move wasn’t surprising because it was known that Symantec was shopping Veritas while planning the spin-out. Along with an $8 billion cash payout, Symantec CEO Michael Brown said the sale “helps us simplify the separation process.”
During a conference call today to discuss the deal and quarterly earnings, Brown emphasized that the sale will be good for Symantec shareholders. It’s unclear how it will affect Veritas, which was already far along the path of preparing to become its own company.
The Carlyle Group did name the new leaders of Veritas. Bill Coleman will be CEO and Bill Krause will become chairman when the deal closes. Coleman was a founder and CEO of BEA Systems, an enterprise software vendor acquired by Oracle in 2008. He also was CEO of cloud software vendor Cassatt, and an executive at Sun and Visicorp. He has been a partner with venture capitalist firm Alsop Louie Partners the past five years.
Krause is a Carlyle executive and board partner for VC firm Andreessen Horowitz, and director of SAN switching vendor Brocade and several other companies. He is former CEO of networking vendor 3Com, which Hewlett-Packard acquired.
A quote from Carlyle’s press release on the deal makes it sound as if Coleman will proceed with the strategy Veritas has been implementing. Coleman said he looks forward to partnering with current Veritas GM John Gannon, chief product officer Matt Cain and VP of worldwide sales Brett Shirk “and the rest of the existing leadership team to establish Veritas as a free-standing company and reinvigorate our culture to drive innovation and value creation.”
Brown said revenue from Symantec and Veritas took a hit last quarter, and he blamed it on a sales force realignment along with executives performing due diligence for the Veritas sale.
Information management revenue of $587 million fell 10 percent from last year, despite double-digit growth in NetBackup software and appliances.
Add Dot Hill Systems to the list of smaller storage vendors who are growing revenue significantly while their large rivals stumble.
Dot Hill differs from the other vendors bucking the trend of declining or flat revenues at EMC, NetApp, Hitachi Data Systems, Hewlett-Packard and IBM. The big difference is that it has been around since 1984. In tech years, that makes Dot Hill old enough to be the grandfather of the likes of Nutanix, Nimble Storage, Pure Storage and Tintri. And it has been through some rough times, unlike its younger rivals.
Dot Hill also has a different business model. It sells most of its systems anonymously through OEM partners who re-brand its storage. HP's MSA platform and Lenovo's S3200 and S2200 are versions of Dot Hill's AssuredSAN arrays. Quantum also uses Dot Hill storage for its StorNext scale-out systems and DXi disk backup targets. Dot Hill has other OEM and channel partners who build systems tailored for vertical industries, most notably telecom, oil and gas, data analytics, media and entertainment, and high-performance computing.
But Dot Hill is growing like a startup. Today the vendor reported $60.06 million in revenue for last quarter, up 25 percent from last year and above expectations. Dot Hill also exceeded Wall Street expectations with non-GAAP income of $3.9 million, which tripled its $1.3 million in income from the same quarter last year. Dot Hill’s forecast for 2015 revenue is from $245 million to $260 million, compared to $217.7 million for 2014.
The growth and optimism comes despite what Dot Hill CEO Dana Kammersgard admitted were headwinds in the storage industry that “are not abating and are likely to exist through 2016.”
So why is Dot Hill bucking that trend? Kammersgard said part of it has to do with its technology. Dot Hill used to mainly supply the hardware for its partners but has added software such as RealStor for autonomic real-time tiering in recent years. Kammersgard said RealStor helped win OEM deals and boost sales for its partners.
He called RealStor’s tiering a disruptive technology for hybrid flash arrays. “The fact that we [tier data] invisibly and in real-time is a significant disruption to the next-generation and traditional storage hierarchy,” Kammersgard said.
Kammersgard also credited Dot Hill's channel strategy for its growth. Dot Hill patiently cultivates OEM partners – particularly in vertical markets – that most storage vendors don't chase because it can take more than a year to get products into the market.
“Traditional storage companies are focused on data centers and the cloud,” Kammersgard said. “They’re not focused on the line of business product portfolio for these [vertical] companies. There is a lack of inclination to pursue OEM business. Large companies like EMC, NetApp and Hitachi are slow to move and inflexible, and not suited to go the OEM route. The new guys like Nimble are basically about sales at any cost, growth at any cost, so they are not suited to taking 18 months to close a deal.”
Tintri, which sells storage built for virtual machines, today closed a $125 million funding round that it plans to use to step up its attack on NetApp’s installed base and take the startup public in 2016.
Tintri's Series F round brings its total funding to $260 million. Founder and CTO Kieran Harty said Tintri will use the funding to grow its sales and marketing and continue making inroads in a storage market in transition. He sees Tintri as one of the hot newcomers hurting the established storage vendors. Because Tintri builds file storage for virtual environments, its main target is NetApp, which has struggled for more than a year.
“We’re going squarely against NetApp,” Harty said. “We will be going up against NetApp aggressively, and you’ll see that in our marketing. You can’t avoid EMC in the market, but we don’t go after EMC specifically. We are seeing the market separate newer companies who are large enough and have enough capital to be players in the market while others are falling behind. We see Nimble, Pure and Nutanix, and we don’t see much of anybody else.”
Harty said Tintri's goal is to become cash-flow positive in 2016, and then move toward an IPO. Tintri executives said last year that the company would hold an initial public offering (IPO) in 2015, but market conditions have been unfavorable. Investors want to see profitability, not only growth.
Industry sources say Tintri generated around $50 million in revenue in 2014 with plans to double that in 2015. Tintri said it grew revenues more than 100 percent in 2014. The company has not had a profitable quarter, however.
“We want to be a public company. We expect to be in position to start that process at the end of this year,” Harty said. “Our focus is on going public when we’re ready and when the market is ready. There is no longer just a focus on growth. The market cares about being cash-flow positive and seeing that you will be profitable. That’s the new reality of the market.”
He said Tintri plans to add “a few hundred people” to its current 450 employees with the new funding. Along with beefing up sales and marketing, Tintri is also planning a new product launch for later this month.
Silver Lake Partners led the funding round with previous investors Insight Venture Partners, Lightspeed Ventures, Menlo Ventures and NEA participating.
Amazon this week made enhancements and added features to its Amazon Simple Storage Service (S3) and its Amazon Aurora relational database. S3 has been upgraded with more event notifications along with support for bucket-level Amazon CloudWatch metrics, while Amazon Aurora now comes with zero-downtime migration.
Customers can now get notifications when an object has been deleted from an S3 bucket. The company launched its event notification model last year, with support for objects created through Put, Post and Copy operations.
Amazon has been reducing prices on its cloud storage and adding features such as a Reduced Redundancy Storage model, VPC endpoints and cross-region replication. The event notifications are its latest improvement.
“You can now arrange to be notified when an object has been deleted from an S3 bucket,” according to the AWS blog. “Like the other types of notifications, delete notifications can be delivered to an SQS queue or an SNS topic or used to invoke an AWS Lambda function. The notification indicates that a Delete operation has been performed on an object, and can be used to update any indexing or tracking data that you maintain for your S3 objects.”
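As a minimal sketch of what wiring up such a delete notification might look like with the AWS SDK for Python, the configuration below routes `s3:ObjectRemoved:*` events to an SNS topic; the bucket name and topic ARN are placeholders, not details from the announcement:

```python
# Notification configuration routing S3 delete events to an SNS topic.
# The s3:ObjectRemoved:* event type matches Delete operations on objects.
notification_config = {
    "TopicConfigurations": [
        {
            # Placeholder ARN; substitute your own SNS topic.
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:object-deleted",
            "Events": ["s3:ObjectRemoved:*"],
        }
    ]
}

# With AWS credentials configured, the configuration would be applied via:
#   import boto3
#   boto3.client("s3").put_bucket_notification_configuration(
#       Bucket="example-bucket",  # placeholder bucket name
#       NotificationConfiguration=notification_config,
#   )

print(notification_config["TopicConfigurations"][0]["Events"][0])
```

The same configuration structure accepts SQS queue and Lambda function targets in place of the SNS topic, matching the delivery options the blog describes.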
Amazon CloudWatch is designed to track metrics for AWS services and for customers' applications. Customers can now monitor and set alarms on their S3 storage usage; CloudWatch triggers an alarm when a metric crosses a specified threshold.
“Available metrics include total bytes (Standard and Reduced Redundancy Storage) and total number of objects, all on a per-bucket basis. You can find the metrics in the AWS Management Console,” according to the AWS blog.
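As a sketch, the new per-bucket metrics could be read back through CloudWatch's standard statistics call; the bucket name below is a placeholder, while the namespace, metric and dimension names are the ones CloudWatch uses for S3 storage metrics:

```python
from datetime import datetime, timedelta

# Request parameters for CloudWatch's GetMetricStatistics call, asking for a
# bucket's total stored bytes. S3 publishes these metrics daily under the
# AWS/S3 namespace, keyed by bucket name and storage type.
params = {
    "Namespace": "AWS/S3",
    "MetricName": "BucketSizeBytes",
    "Dimensions": [
        {"Name": "BucketName", "Value": "example-bucket"},   # placeholder bucket
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    "StartTime": datetime.utcnow() - timedelta(days=2),
    "EndTime": datetime.utcnow(),
    "Period": 86400,           # one datapoint per day
    "Statistics": ["Average"],
}

# With AWS credentials configured:
#   import boto3
#   stats = boto3.client("cloudwatch").get_metric_statistics(**params)
```

Swapping `MetricName` to `NumberOfObjects` (with storage type `AllStorageTypes`) would return the object-count metric the blog mentions.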
Amazon also unveiled enhancements to Amazon Aurora, its MySQL-compatible database engine, which is designed for high performance and scales up to 64 TB of storage. The database launched last year, and Amazon is enhancing the product based on customer input. Among the additions is zero-downtime migration.
“Immediately after you migrate, you will begin to benefit from Amazon Aurora’s high throughput, security, and low cost,” according to the blog. “You will be in a position to spend less time thinking about the ins and outs of database scaling and administration, and more time to work on your application code.”
Amazon Aurora has also been enhanced with replication capabilities. Each Aurora instance can have up to 15 replicas that add read capacity.
“It means that Amazon Aurora is able to handle far more concurrent queries (both read and write) than other products,” according to the blog. “Amazon Aurora’s unique, highly paralleled access to storage reduces contention for stored data and allows it to process queries in a highly efficient fashion.”
Amazon is launching these capabilities in its East and West regions in North America and in Europe, and intends to expand to other regions over time.
Amazon's announcement comes on the heels of Google's launch of its Google Cloud Storage Nearline archiving service, a sign that Google is making an aggressive run at AWS and Microsoft Azure. Google Cloud Storage Nearline is currently in beta, and the low-cost archive cloud boasts a data-retrieval time of about three seconds, while Amazon Glacier has restore times ranging from three to five hours.
Add Quantum to the list of storage vendors that struggled last quarter.
Quantum on Thursday reported revenue of $110.9 million, down around $17 million from last year and below its original forecast. The company lost $10.1 million, compared with a loss of $4.3 million a year ago. The declines were especially sharp in Quantum's core tape business.
Quantum CEO Jon Gacek said organizations seemed reluctant to buy, and Quantum refused to give deep discounts at the end of the quarter to close deals.
“The overall storage environment was particularly challenging at the end of the quarter as customers seemed to pull back on purchases,” Gacek said. “I’m not willing to panic and do dumb things [to make sales].”
The challenges are expected to continue this quarter. Quantum's forecast of between $120 million and $130 million is an increase from last quarter, but down from the $135 million the vendor recorded in the same quarter last year.
“I’m not confident that the market is going to bounce back,” Gacek said, “although I do feel better about how [this quarter] has started. It just feels choppy to me.”
Quantum did not give guidance for the entire fiscal year, which started last quarter.
Gacek sees a silver lining in what Quantum calls scale-out storage, which is primarily its StorNext file system, Lattus object storage and new Artico NAS archiving appliance. Those products grew 54 percent year-over-year to $28 million, a hike of around $10 million, due to strong sales in media and entertainment and video surveillance. But disk backup revenue was flat at $17.3 million and tape revenue fell about $28 million.
Because the tape revenue is still the majority of Quantum’s business, it can’t grow overall by increasing scale-out if tape keeps slipping. The previous quarter, Quantum beat expectations when tape sales stabilized and DXi disk backup and scale-out storage increased.
“I like how we’re positioned in scale-out, and there is a lot of opportunity there,” Gacek said. “I would like to be more confident in the data center stuff.”
EMC, which has downplayed the significance of industry benchmarks, today published its first Storage Performance Council (SPC) block benchmarks for its unified storage systems.
Results for the VMAX 400K and VNX 8000 arrays were strong, but not dominant. The VMAX 400K achieved the highest SPC-2 (bandwidth) result for MB per second with 55,643.78 and finished second behind the Hewlett-Packard XP7 (based on Hitachi technology) for price/performance at $33.58 per SPC-2 MBps.
The VNX 8000 finished second in SPC-1 (transactional) performance and fourth in price per SPC1 IOPS.
The big surprise is that EMC published benchmark numbers at all. The vendor has long insisted that customers should do application performance testing instead of relying on SPC numbers, although it has published benchmarks for file storage.
In an EMC Pulse blog about the benchmark results, authors Jeff Boudreau and Fidelma Russo say they still favor application performance testing. “We continue to believe that application performance testing is the best predictor of real-world performance, especially for critical workloads,” they wrote.
However, customer requests have prompted EMC to benchmark.
“The world’s changing and our customers are changing,” said Jonathan Siegal, VP of marketing for EMC Core Technologies. “Customers have more choices, and there’s a lot of noise out there.”
The Pulse bloggers added: “By publishing these SPC results, customers can use these standard assessments to help simplify their initial high-performance storage evaluations and eliminate much of the ‘noise’ in their screening process.”
The VMAX 400K system EMC tested was an all-flash model with eight 512 GB engines and 32 200 GB solid-state drives (SSDs) per engine for 4 TB of global memory and 264 SSDs. The storage connected to the host system with 128 Fibre Channel links.
The list price for the system tested was $4.9 million, although SPC used a discount price of $1.9 million for its price/performance calculation.
SPC-2 uses three workloads, executed separately, to gauge system performance: large file processing, large database query and video on demand. Besides the VMAX and HP XP7, other enterprise systems that have run the same benchmark include the Fujitsu Eternus DX8700 S2, IBM DS8870 and XIV Gen3, and Hitachi Data Systems VSP.
The VNX 8000's 435,067.33 SPC-1 maximum IOPS finished second behind the 600,000 IOPS of Huawei's OceanStor Dorado 5100. Its price/performance rating of $0.41/IOPS placed fourth, behind the Infortrend EonStor DS 3024B ($0.17/IOPS), the second-place X-IO ISE 820 G3 and the third-place Dell SC 4020. The SPC-1 test measures a system's performance on an OLTP I/O workload.
The VNX tested was also an all-flash system, with 40 100GB eMLC SSDs for 3.9 TB of capacity and four 300GB SAS disk drives for the VNX operating system and other system information.
The system had 256 GB of DRAM and connected to 16 host systems over 32 8 Gbps Fibre Channel links. The list price was $317,000, but a street price of $177,000 was used for the price/IOPS calculation.
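The price/performance figure is simply the tested price divided by the peak benchmark result; a quick check with the VNX 8000 numbers above:

```python
# SPC-1 price/performance = tested system price / maximum SPC-1 IOPS.
street_price = 177_000        # discounted price used in the SPC-1 submission
peak_iops = 435_067.33        # VNX 8000 maximum SPC-1 IOPS
price_per_iops = street_price / peak_iops
print(f"${price_per_iops:.2f}/IOPS")  # matches the reported $0.41/IOPS
```

The same division with the Infortrend and Huawei figures explains the rankings: a lower price or a higher IOPS result both pull the ratio down.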
FalconStor’s 2015 revenue has been paltry at the mid-way point of the year, but the vendor is crowing about new OEM deals for its FreeStor data services software.
FalconStor said it signed three OEM agreements and four service provider deals for FreeStor last quarter. The OEMs include storage vendors X-IO Technologies and Kaminario.
X-IO is using FreeStor for software services in its Iglu SAN platform launched Tuesday. X-IO did not advertise FalconStor as the OEM technology for Iglu, but did not deny it and gave FalconStor permission to disclose the deal.
All-flash vendor Kaminario is expected to use FreeStor to add more storage management features to its K-2 arrays, which were originally designed with performance in mind more than manageability. Kaminario added thin provisioning, inline deduplication and compression, and encryption in 2014, but replication remains a roadmap feature. Replication is covered by FreeStor, which was originally developed for all-flash vendor Violin Memory and is used with Violin Concerto arrays.
FalconStor CEO Gary Quinn said cloud service providers Egenera and Telefonica have signed on to resell FreeStor. He said FalconStor now has six OEMs for FreeStor along with four service providers.
FalconStor can use the sales help. Revenue for last quarter was $9.6 million, down from $11.3 million a year ago and $10.1 million in the first quarter of this year. The vendor lost $2.2 million, an improvement on its $3 million loss a year ago. With $18.8 million in cash, FalconStor needs to reverse its historical trend and start turning a profit soon.
Quinn is trying to spin FreeStor and its partnerships as a turning point for the company.
He began FalconStor’s earnings call Wednesday by saying, “The first half of 2015 is a tale of two cities story. It’s the beginning of a new era and winding down of the past.”
He ended with: “Past performance and future opportunities are on the opposite ends of the spectrum” for FalconStor.
None of FalconStor’s OEM partners are market leaders, however, so it will need a lot more deals and enterprise customers to turn things around.
Few people argue that information is important. Its value varies and changes over time, but information remains the most critical resource for most organizations.
Yet, we see storage products where the importance of storing, accessing, and managing information is not addressed effectively or is seemingly trivialized.
There is complexity in managing information, especially as its value changes. Certain requirements must be met when storing and managing information: it must be available, 100 percent valid, secure from an access standpoint, and protected from disasters, hardware failures and human errors.
People in IT often forget these requirements, as do vendors. We see storage products that emphasize only moving data so a program can execute against it, assuming there is no real storage issue beyond that. They ignore that, given the high value of information, information resides with storage and is only transient on servers and networks. Stewardship of the information required for processing and analysis is the responsibility of the systems where the data is stored.
Another important consideration is that information is stored for a long time, typically for decades. The real concerns are about storing, managing, and administering the information over that period. The infrastructure will change over that time. Think of how many servers will be replaced over the information’s lifespan. This is also the area where major costs are incurred. The costs for storing and managing information over its lifespan can be far more significant than other technology costs.
Systems and solutions must make allowances for the cost of storing information for the lifespan of the data. When solutions do not address that concern, someone (the customer who understands the value of the information) must incur greater cost and effort to add those capabilities. To not do so adds an unacceptable measure of risk. The priority of information must be covered effectively when evaluating and making decisions about storage.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).