When startups raise funding, contributors often include strategic investors who hope to benefit from the technology the new company develops. Storage startups usually turn to technology industry giants for strategic investments.
Cloud storage startup Symform is going in a different direction. Symform today said it received a $3 million strategic investment from Second Century Ventures (SCV), a venture capital fund of the National Association of Realtors (NAR).
The Seattle-based Symform’s cloud consists of local drive space contributed by its subscribers. So perhaps you could say it’s in the real estate business: it sells space on its customers’ drives.
But the real estate trade association didn’t fund Symform because it considers it a kindred spirit. NAR will make Symform subscriptions available to its 1.1 million members as a membership benefit. The realtors add cloud backup, and Symform adds to its subscription base.
Symform bills itself as a crowd-sourced cloud network. Organizations join the network by contributing extra local drive space in exchange for fast and secure backup. Besides providing backup, the network also synchronizes data from any device.
“We see ourselves as the Skype of data storage,” Symform CEO Matthew Schiltz said.
The startup’s technology encrypts data with 256-bit AES. It breaks data into 64 MB blocks, divides each block into 64 fragments of 1 MB each, and randomly distributes them across separate devices in the Symform cloud storage network. The architecture regenerates and redistributes missing fragments when devices fail, according to Symform technical documents.
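The block-and-fragment scheme can be sketched in a few lines. This is a simplified illustration, not Symform’s actual implementation: it ignores the encryption step and the redundant fragments a real system would need in order to regenerate data from failed devices, and the device names are hypothetical.

```python
import random

BLOCK_SIZE = 64 * 1024 * 1024    # data is chunked into 64 MB blocks
FRAGMENT_SIZE = 1024 * 1024      # each full block yields 64 fragments of 1 MB

def place_fragments(block: bytes, devices: list,
                    fragment_size: int = FRAGMENT_SIZE) -> dict:
    """Split an (already encrypted) block into fixed-size fragments
    and scatter them across distinct, randomly chosen devices."""
    fragments = [block[i:i + fragment_size]
                 for i in range(0, len(block), fragment_size)]
    # random.sample picks without replacement, so no single device
    # ends up holding two fragments of the same block
    chosen = random.sample(devices, len(fragments))
    return dict(zip(chosen, fragments))
```

Reassembly is just reading the fragments back in order; a production system would additionally erasure-code each block so missing fragments can be rebuilt from the survivors.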
The technology is integrated with NAS storage devices from Synology, Netgear and QNAP. The company claims it has active users in 138 countries, up from 46 at the end of 2011. It also claims more than 7 billion fragments now are stored in the Symform cloud.
The $3 million is officially part of an $11 million Series B funding round Symform announced last April. The startup initially raised $1.5 million in seed money, and it has raised a total of $20 million in funding to date. Its previous investors include Longworth Venture Partners, OVP Venture Partners and WestRiver Capital.
Brocade completed its CEO search this week, hiring Lloyd Carney to replace Mike Klayko.
Klayko said last August that he would step down as soon as the board found a replacement, ending an eight-year tenure as Brocade’s CEO. During that time, Brocade acquired its main storage switch rival McData, outdueled Cisco for the top spot in storage networking revenue and spent $2.6 billion to get into the Ethernet market by acquiring Foundry Networks. But Klayko failed to attract a buyer for Brocade despite a great deal of speculation that the company was for sale several times over the past few years.
If you’re looking for hints on where Carney might take Brocade, two things about his resume stand out. First, he has little storage background and plenty of network experience. Second, he sold off two companies he ran – Xsigo Systems to Oracle last year and Micromuse to IBM in 2005.
Xsigo is the closest Carney has come to a storage company. Xsigo actually did I/O virtualization and was more of a networking play, but did work with storage gear. After Oracle bought Xsigo, it tried to recast its technology as software-defined networking. IBM acquired network management software vendor Micromuse for around $865 million, and Carney stayed with IBM for one year to run the Micromuse division.
He has also been COO at Juniper Networks, president of Nortel’s wireless internet division and a vice president at Bay Networks as well as CEO of his own angel investment firm. Carney obviously knows his way around Silicon Valley, which could help if Brocade puts itself up for sale again. If not, you can expect the vendor to continue its push to become an Ethernet network leader while holding on to the No. 1 Fibre Channel network spot for as long as that market remains lucrative.
There have been plenty of acquisition rumors around Brocade over the years, despite Klayko’s insistence in 2009 that the company was not for sale. Hewlett-Packard and Dell were believed to be considering buying Brocade before they acquired other networking companies, and there has also been talk of private investors buying Brocade.
A two-tier data archiving approach can help free primary storage capacity, reduce expenses from regular data protection, and meet compliance or business requirements for specific data.
A two-tier strategy divides archive data based on the probability of accessing that data. One tier is an online archive where data remains directly accessible; the other is a deep archive where access may be more involved.
An online archive has these characteristics:
- Data can be transparently and directly accessed by users or applications without other intervening processes.
- The time to access data is only nominally affected compared with primary storage, with no impact on users and applications.
- Typically NAS is used because the largest amount of archived data is in the form of files. There may also be support for objects depending on systems in use.
- The online archive has support for compliance requirements such as immutability with versioning of files, audit trails of access to data, and regular integrity checking of the data.
- The storage is much less expensive than primary storage.
- Only changed files are replicated for protection.
- Systems have built-in longevity with automatic, transparent migration to another platform. The migration has no impact on operations or staff.
A deep archive has different characteristics from the online archive, including:
- Data moved to the deep archive is not expected to be needed again for any normal processing.
- Access from a deep archive may require more time than applications can tolerate.
- Data may be stored in the form of objects with metadata about ownership and retention controls in order to permit massive scaling. The storage could be on local systems or in a cloud service.
- Longevity is handled automatically by the systems or service with transparent migrations.
- Compliance features are fully supported including digital data destruction.
- Protection is automatic with geographically separated replicated copies.
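A tiering policy along the lines described above can be reduced to a simple rule. The sketch below uses time since last access as a proxy for access probability; the one-year cutoff is a hypothetical parameter chosen for illustration, not a figure from the strategy itself.

```python
from datetime import datetime, timedelta

def choose_tier(last_accessed: datetime, now: datetime,
                online_cutoff_days: int = 365) -> str:
    """Assign archived data to a tier based on how recently it was
    accessed, a rough proxy for the probability of future access."""
    if now - last_accessed <= timedelta(days=online_cutoff_days):
        return "online"   # still likely to be read: keep directly accessible
    return "deep"         # not expected to be needed: cheapest storage
```

In practice the decision would also weigh ownership, compliance restrictions and business value, as discussed elsewhere in this piece.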
There is clear justification for a two-tier archive. You can gain large savings by moving data that is not expected to be used again to the lowest-cost storage, without compromising protection, integrity or longevity. Economic models show the advantage, and the value compounds over time as data is retained and more is added. New systems and software that support object storage for very large numbers of archived items, along with transparent migration for longevity, are enabling wider usage. For all of these reasons, a two-tier archive is a good fit for a storage strategy.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
It’s the New Year and it’s time to organize things around the home. Throw out things that you no longer need. Take things that you probably won’t need but you’re not sure about, and put them in the attic. It’s a fairly simple exercise, but not done regularly. It happens now because the advent of a new year reminds us to do it.
This is similar to what storage administrators need to do this time of year. They need to organize their data, delete data that is no longer needed, and archive data that is unlikely to be used in regular processing. Maybe the archive target device should be called the “attic.” What they are really doing is making decisions about information. What is the value of the information, who owns it, what restrictions regarding compliance are there for that information, and can it be deleted?
There are several approaches to making decisions about the information. Some people don’t make decisions because there is no clear guidance and they might make the wrong choice. An interesting strategy from one storage administrator was to archive data to tape without migrating the data to new tape technology when the old generation of tape drives became obsolete. Eventually, there would be no drives left that could read those tapes and the administrator did not have to worry about the decision to delete the data.
Archiving has become a misused or at least misunderstood term. It really is about taking data that is not expected to be needed and moving it to another location. This is done for economic advantages, and potentially to meet regulatory requirements. Over time, the term has expanded to “active archive” and “deep archive.” Active archive is for data that is retained in the original context so an application or user can retrieve it without an intervening process. Deep archive is for data that is not expected to be needed, but may be. Both locations can support immutability, versioning, and other compliance requirements. There are advocates of using cloud-based storage for deep archive.
Managing information effectively includes archiving data by making intelligent decisions about what gets archived and where. The economic value of moving data off primary storage systems is great. An ongoing policy to move data periodically compounds that value. It should not take a trigger such as a new year to make a decision about organizing and moving unneeded stuff to the attic. It is a valuable IT process.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Nearly half of employees use online file sharing services even though their companies have a policy against it, according to a recent survey conducted by cloud storage vendor Nasuni.
For its Special Report on Shadow IT in the Workplace, Nasuni surveyed 1,300 corporate IT users. Online file sharing and bring your own device (BYOD) practices are gaining in popularity as employees use PCs, smartphones and tablets such as iPads to access work-related files. And they use these devices to work from airports, homes, cafes and other locations outside the corporate office.
Companies are starting to call this “shadow IT” because these services and devices often are not controlled by traditional IT departments. For many companies, it’s a growing security problem.
“We were surprised that the percentage of users is one out of five. That is not a trivial number. There is an incredible amount of usage and it was higher than we expected,” said Connor Fee, Nasuni’s director of marketing. “We talked to a lot of IT guys and they complained about Dropbox specifically.”
Other services mentioned include iCloud and Google Drive. The survey included 300 IT managers and 1,000 employees. The survey found that one in five employees uses Dropbox for work files, and half of employees who use file sharing services do not know if their companies have a policy against it.
The report also found that corporate executives are the worst offenders, with vice presidents and directors most likely to use Dropbox despite the security risks. About 58% of employees with personal smartphones or tablets access work files from those devices. The survey found that 54% of respondents work at organizations that do not allow access to file sharing.
“The fact that corporate leaders are the worst offenders tells me IT is failing to deliver on something that is needed,” Fee said. “Having and educating about a policy is not enough. It needs to be addressed beyond policies.”
IBM has announced plans to acquire partner StoredIQ, an Austin, Texas-based software company that specializes in classifying and managing big data, particularly for regulatory compliance and legal reasons.
The financial terms of the acquisition were not disclosed, but IBM expects to close the deal around March 2013.
StoredIQ has been an IBM partner for two years. IBM’s Tivoli data management and archive storage systems are already certified to work with StoredIQ applications. The technology dynamically identifies and classifies data that either needs to be retained or decommissioned, while also governing who has access to what data.
Big Blue plans to make StoredIQ’s technology a part of IBM’s Information Lifecycle Governance (ILG) suite that has the ability to set management policies, mine and assess what data is valuable and what data should be deleted. The company’s software does not overlap with IBM’s current products in this area, said Ken Bisconti, IBM’s vice president for Enterprise Content Management (ECM) software.
“They have the ability to dynamically identify data that is in place,” Bisconti said. “Most vendors require companies to move files to a separate repository before it’s classified and managed. StoredIQ dynamically collects that information wherever it resides. We have the ability to collect data in a repository but we did not have the ability to dynamically collect it.”
Bisconti said IBM will retain StoredIQ’s 50 employees, whom it considers essential intellectual property. More integration will be done with the ILG suite, although StoredIQ’s software already works with the ILG portfolio. IBM’s policy governance technology manages the overall plan of identifying high-value versus low-value data; those instructions are sent to the StoredIQ engine, which executes the policy.
“For example, StoredIQ can do e-discovery for data that is in place if you have to respond to a request for discovery material,” Bisconti said. “Typically, companies have a difficult time to get to data that was not moved to a repository.”
StoredIQ was founded in 2001 under the name of DeepFile Corp., a data classification company that created metadata to manage different types of unstructured data. The company later changed its name to StoredIQ and focused on compliance and e-discovery to help companies figure out what data to keep and what to delete.
IBM’s information lifecycle governance business is part of its software group.
Exablox came out of stealth today by disclosing it has $22 million in funding, but CEO Doug Brockett is only dropping hints about its product until a full-blown rollout next spring.
Brockett and director of marketing Sean Derrington described the product as NAS-like, with flash as well as hard drive storage and managed at least partially through the cloud. They said it will share characteristics with cloud gateway products, but won’t be a direct competitor to cloud NAS vendors Nasuni, Ctera, Panzura, and TwinStrata.
“It’s more than a gateway or caching appliance for file serving,” Derrington said. “We want to enable customers to easily manage capacity and performance on-premise. If they choose to locate information outside of their primary data center, they have the flexibility to do it.”
Brockett said the key characteristic for Exablox storage will be the ability to scale without complexity.
“How do you manage a scale-out infrastructure spread across lots of locations,” he said. “We think the answer is having a management system that runs on the cloud itself instead of the device.”
For now, Brockett is focused on scaling out the Mountain View, Calif., company that so far consists of 28 employees and five contractors. “It’s me and Sean and a bunch of engineers,” Brockett said. “We need to build a go-to-market team.”
Brockett said the product is designed for companies with 50 to 500 employees, and many are already running the system on a trial basis.
The funding comes from venture capital firms DCM, Norwest Venture Partners and U.S. Venture Partners. The $22 million consists of two rounds of funding, with the first dating to 2010. Brockett comes from SonicWall and Derrington worked in storage product and cloud product marketing for Symantec.
Competition in the all-flash market will grow intense in 2013, and startups Whiptail and SolidFire this week moved to strengthen their companies.
Whiptail closed a $31 million funding round today. Ignition Partners led the round, with BRE Ventures and Spring Mountain Capital participating, along with strategic investors SanDisk, an unnamed “Silicon Valley industry titan,” and debt financing from Silicon Valley Bank. Whiptail also hired a new CFO, Catherine Chandler.
SolidFire bolstered its senior executive team, adding RJ Weigel as president, John Hillyard as CFO and Tom Pitcher as VP/International.
Whiptail is the first startup to receive funding from SanDisk Ventures’ new $75 million fund for strategic investments. Alex Lam, director of SanDisk Ventures, said SanDisk picked Whiptail among the flash array vendors because its arrays can scale into tens of terabytes today with plans to drastically extend that.
“There’s a lot of noise in the industry from companies talking about the size of the round they raised or they got investments from Sequoia or somebody like that,” Lam said. “But you really want to look at the core technology. I look at the ability to take a terabyte and scale to petabytes without the customer having to purchase a new platform.”
Whiptail recently said its upcoming Infinity storage will scale to 360 TB in early 2013, and CEO Dan Crain said it will eventually go to petabyte scale.
SanDisk and Whiptail had no previous relationship, but you can expect Whiptail to get its flash memory from SanDisk now. Lam said SanDisk will likely look to invest in server-side flash and flash software companies next.
“This is our first stake in the ground to show we’re serious,” he said. “We view flash as disruptive in enterprise storage. We want to build up an ecosystem of enterprise flash technologies.”
Whiptail received a much smaller funding round – less than $10 million – in January 2012.
SolidFire stands out from other all-flash array vendors because it sells almost solely to cloud providers. Weigel fits with that strategy, because he ran sales and field operations at 3PAR in the early days when a good part of its customers were service providers such as Savvis and Terremark. Weigel said cloud providers hold great potential for SolidFire.
“So much of what’s in the cloud now is not the most critical apps,” he said. “People are putting test/dev and backup in the cloud, but customers have been waiting for quality of service and guaranteed SLAs for the most critical apps, such as Oracle. We’re going to deliver on that promise. Cloud service partners will be able to put together a business practice around our storage.”
But Weigel and SolidFire chief marketing officer Jay Prassl said they can see the day when SolidFire moves into the enterprise as well.
“Cloud providers are step one before taking the next step into large enterprises,” Prassl said. “We get a lot of calls today from enterprises, and we don’t hang up the phone. We’ll be announcing some of them next year. But companies that have a specific focus like SolidFire can do well in the cloud space.”
Weigel added: “Obviously, there will be a time and place where other markets make sense, but we are focused on the service provider cloud space today. It’s a great growth market for us.”
Object storage is a method of storing information that differs from the popular file storage and venerable block storage most familiar in IT. It is a type of storage where information and metadata are both stored, although the metadata may be kept with the actual information or separately.
We often see new object storage products these days with slightly different implementations. While many of these new object storage offerings are designed to solve specific problems for customers, all have the opportunity to be used across many different applications and environments.
The object storage of today is different from what some may have been familiar with in the past. Previously, a content address was used to identify data put into a storage system such as the EMC Centera. The new object storage, for the most part, stores files with associated metadata, frequently accessed using HTTP and REST. The metadata can differ depending on the implementation, application or system, and contains information such as data protection requirements, authorizations and controls for access, retention periods, regulatory controls, etc.
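As a rough illustration of how an object and its metadata travel together over HTTP, the sketch below builds a PUT request carrying user metadata as request headers. The `x-meta-` prefix, the container layout and the field names are all hypothetical; real APIs such as Amazon S3 or OpenStack Swift define their own header conventions.

```python
def build_put_request(container: str, object_id: str, data: bytes,
                      metadata: dict) -> dict:
    """Build a generic object-store PUT: payload in the body, user
    metadata (retention, ownership, etc.) as request headers."""
    headers = {
        "Content-Type": "application/octet-stream",
        "Content-Length": str(len(data)),
    }
    for key, value in metadata.items():
        headers[f"x-meta-{key}"] = str(value)   # hypothetical header prefix
    return {"method": "PUT",
            "url": f"/{container}/{object_id}",
            "headers": headers,
            "body": data}
```

For example, `build_put_request("archive", "q4-report.pdf", data, {"retention-days": 2555, "owner": "legal"})` would yield a request whose retention and ownership controls ride alongside the payload, ready for the store to index.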
New object storage systems address storage challenges, including:
• Massive scaling to support petabytes and even exabytes of capacity with billions of objects.
• High-performance data transfer demands that go beyond the traditional storage systems used in IT today.
• Compliance storage that meets regulatory controls for data, including security controls.
• Longevity of information storage where data can be stored and automatically transitioned to new technologies transparent to access and operational processes.
• Geographic dispersion of data for multiple site access and protection from disaster.
• Sharing of information on a global scale.
For the vendors offering new object storage systems, success with narrowly targeted usages can eventually spread to opportunities in enterprises. They address problems that already apply in the enterprise, but perhaps not at the scale that requires object storage yet.
Some of the vendors offering object storage today include:
Data Direct Networks Web Object Scaler (WOS)
HDS Hitachi Content Platform
Scality Ring Storage
Many of these vendors offer a file interface to their object storage as well as the native object API using HTTP and REST.
The types of object storage are developing so fast that the terminology is inconsistent between vendors. I recently attended the Next Generation Object Storage Summit convened by Greg Duplessie and The ExecEvent. The event was a great opportunity for vendors and analysts to discuss the technology, how to describe it, and the current marketplace. It was clear at the summit that the initial focus for new object storage should be on the problems being solved today, and then on the opportunities to move into more widespread usage.
This will be a developing area in the storage industry and Evaluator Group will develop a matrix to compare the different solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
With major storage vendors in various stages of preparation to launch all-flash arrays in 2013, the startups already selling flash storage are working to stay a step ahead. For some, this means adding storage management and data protection, while others work on making systems redundant and still others try to reduce costs.
Whiptail’s plan for staying ahead of the game is to make its all-flash arrays the most scalable in the market. The startup is preparing to launch its Infinity architecture in the first quarter of next year. Infinity is an expansion of the vendor’s current Invicta platform, except it scales to 30 nodes and 360 TB of flash compared with Invicta’s six nodes and 72 TB.
And that’s just the beginning, says Whiptail CEO Dan Crain. “Our largest tested configuration is 30 nodes,” he said. “We can probably go 10 times that, but we haven’t tested it.”
It’s unlikely that anybody will need – or want to pay for – 3.6 PB of flash in one system for a while, so Whiptail has time to test larger configurations. But Crain said his strategy is to have an architecture in place for his early customers to grow into as flash takes hold.
“Our basic message always has been organized around building a platform that folks can invest in and keep building onto,” he said. “People can take anything they’ve ever bought from us and organize it into Invicta.”
Whiptail claims it has achieved 2.1 million IOPS and 21.8 GB per second throughput in testing with a 15-node 180 TB set-up, and projects more than 4 million IOPS and 40 GBps with 30 nodes.
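Those projections are consistent with roughly linear scaling, as a quick back-of-the-envelope check shows. The per-node figures below are derived from the published 15-node result under a linear-scaling assumption; they are not numbers Whiptail has stated.

```python
# Whiptail's published 15-node, 180 TB test result
nodes_tested, iops_tested, gbps_tested = 15, 2_100_000, 21.8

# Implied per-node figures, assuming roughly linear scaling
iops_per_node = iops_tested / nodes_tested   # 140,000 IOPS per node
gbps_per_node = gbps_tested / nodes_tested   # ~1.45 GBps per node

# Projection for the full 30-node Infinity configuration
nodes_max = 30
projected_iops = iops_per_node * nodes_max   # 4.2 million IOPS
projected_gbps = gbps_per_node * nodes_max   # ~43.6 GBps
```

The projected figures line up with the company’s claim of more than 4 million IOPS and 40 GBps at 30 nodes.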
Infinity requires several pieces of technology, including version 5.0 of Whiptail’s Racerunner operating system, and enhancements to the array’s silicon storage routers.
Crain said he doesn’t expect flash to take over the storage world overnight. He predicts it will be a gradual process as early customers use it for high-performance applications and eventually move other critical data onto flash.
That’s why he wants to get an early customer base that will grow into Whiptail storage as it supports higher scale.
“We’ve always said we’re going to build into the market,” he said. “We never go out and tell everybody we’re going to take over the world because that’s not rational. Adoption of our technology is in its infancy.”
Crain said Whiptail already does things such as real-time error correction, clustering, auto-failover and asynchronous replication. Deduplication, a potentially key feature for SSD because of its limited capacity, remains a roadmap item.
“Over time we’ll have dedupe,” he said. “We’re very sensitive on performance latency, so we tend not to compete on cost per gig. Dedupe has benefits in general, but it’s still not yet widely deployed on primary storage.”