Storage Soup


September 1, 2014  4:52 PM

DataGravity’s smart storage arrays add meaning to bits and bytes

Rich Castagna

At a time when most of the activity in the storage market seems to be focused on taking away as much intelligence as possible from physical storage devices, there’s one vendor bucking that trend. DataGravity, which recently emerged from stealth, rolled out a couple of new arrays that up the ante for data intelligence and management in a storage system.

DataGravity’s Discovery Series comprises two unified arrays at this time: the DG2200 with 48 TB of combined flash and hard disk raw capacity and the DG2400 with twice that capacity. These are dual-controller systems that support both block (iSCSI) and file (CIFS, SMB and NFS) storage. Both models are configured with what DataGravity refers to as 2U “computing” and 4U “storage” enclosures. DataGravity was founded by Paula Long, CEO, and John Joseph, president; the two previously teamed up to create EqualLogic, the iSCSI storage pioneer that was acquired by Dell in 2007.

So far, these two boxes from DataGravity might sound like just about any other midsized array that mixes in a little solid state with hard disk storage—right? But as unremarkable as the hardware configurations might be, it’s the systems’ software that’s the real story here.

A while back in an editorial in Storage magazine (Data protection methods, define thyself), I suggested that stored data needed to carry more intelligence about itself. In the context of the editorial, that intelligence would be in the form of metadata that instructed data protection systems on how to handle that particular piece of data and what to do with it.

DataGravity’s Discovery Series takes a different approach, but the results are pretty similar to the system I imagined. The differences are that DataGravity isn’t imagining it—it’s here now—and it pools the metadata the system gleans in a central repository rather than packing it in with the data itself.

The clever engineers at DataGravity realized that a key component of a high-availability storage system—the secondary controller—spent much of its time sitting idle. They use that controller’s horsepower to index stored data and to parse it into useful information about each file. DataGravity also does the usual stuff, keeping track of creation and modification events and who was responsible for those activities.

But the collected metadata reveals even more about the contents of the data, allowing searches for specific items such as “personally identifiable information”—or PII—that could include Social Security numbers, credit card numbers or email addresses. Information governance, whether internal corporate governance or compliance with legal regulations, is aided by the ability to do complex, sophisticated searches that can identify some of the interrelations among disparate files.
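To make that kind of content classification more concrete, here is a minimal sketch of pattern-based PII detection. It is purely illustrative: the patterns, names and sample text are my own assumptions, not DataGravity's implementation, which would be far more sophisticated (checksums, context, locale-aware formats).

import re

# Illustrative PII patterns only; real classifiers use checksums,
# context and locale-specific formats in addition to regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def find_pii(text):
    """Return a dict mapping each PII type to the matches found in text."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(find_pii(sample))  # {'ssn': ['123-45-6789'], 'email': ['jane.doe@example.com']}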

This deep dive into data can also help add another level of protection to the data itself. The DataGravity system can create DiscoveryPoints, which keep track of all changes and activities related to a piece of data. DiscoveryPoints work like snapshots and allow recovery of previous versions of data if the primary copies become damaged or corrupted. One of the neat things about DiscoveryPoints is that data can be recovered at the file (even from within a VM’s VMDK), VM, file system or LUN level.

The Discovery Series is brand new, but it isn’t tough to envision it serving as a platform for archiving, data protection and disaster recovery systems. As DataGravity opens its APIs to other companies, those vendors will be able to hook into the boxes and integrate their capabilities on top of the data intelligence that DataGravity provides.

[DataGravity recently won a Best of VMworld 2014 Award for New Technology; the award was presented by TechTarget's SearchVirtualServer.com site.]

August 30, 2014  11:54 PM

Nimble claims record revenue, 663 new customers in fiscal Q2

Carol Sliwa

Nimble Storage Inc. hit record numbers with $53.8 million in revenue and 663 new customers during its fiscal second quarter and closed 444 deals in excess of $100,000 for the 12-month period ending on July 31.

The San Jose, California-based storage vendor, which specializes in hybrid arrays that combine flash and hard disk drives, may have caused some of the major storage incumbents to prick up their ears with the release of its financial results for the 2015 fiscal second quarter, which ran through July. Nimble surpassed its own guidance and beat its Q2 2014 revenue by 89%.

Although Nimble posted a second-quarter net loss of $26 million, the company claimed it remains on track to break even and achieve profitability by Jan. 31, 2016, the end of its next fiscal year.

“That’s about six quarters away, and in the meantime, we’ve talked about investing for growth,” CFO Anup Singh said during Nimble’s earnings call. Singh noted R&D investments in scale-out capabilities and its Adaptive Flash Platform, both of which launched in the first half of the year, and support for Fibre Channel enterprise storage networking, which is due in the fourth quarter.

Nimble also shipped a new CS700 Series high-end array and All-Flash Shelf in connection with the June release of the Adaptive Flash Platform, which combines its cache-accelerated sequential layout (CASL) file system and InfoSight cloud-based management and support system.

“We had a lot of excitement coming out of the major launch, and that led to record net new customer acquisitions and increased follow-on sales,” said Dan Leary, vice president of marketing at Nimble. “Our existing customers who were looking for more performance purchased additional systems. Scale-out benefited us because of their ability to cluster those systems together. And all of that really helped in delivering the really strong results that we had for the quarter.”

The new high-end CS700 factored into the largest deal ever for Nimble – a seven-figure transaction with a large government agency, according to Leary. The agency, which Leary declined to name, chose Nimble storage for its performance-sensitive Oracle databases, VMware server farm, mission-critical vertical applications and video repositories.

Nimble CEO Suresh Vasudevan said the customer’s CIO told him, “The single biggest factor that drove the deal was InfoSight.” InfoSight was able to troubleshoot problems with the agency’s network on two or three occasions, and the experience caused the CIO to think the Nimble system could support the organization’s environment better than products from some larger storage vendors, according to Vasudevan.

Nimble, which incorporated in Jan. 2008 and went public in Dec. 2013, has been trying to expand its customer base beyond the mid-sized companies that factored into the majority of its early sales. Leary said the bulk of the early customers were typically in the range of 250 to 2,500 employees with a storage footprint of 10 TB to 100 TB.

With the release of its financial results this week, Nimble noted that its installed base of large enterprises stood at 235 at the end of its 2015 fiscal second quarter, up from 130 on July 31, 2013. (The company defines a large enterprise as a Global 5000 company, according to Leary.) The current roster of 3,756 customers includes seven of the global top 50, 13 of the top 100 and 53 of the global top 500 enterprises, according to Nimble.

Cloud service providers represented another substantial area of growth. Nimble claimed to have 156 cloud service provider customers on July 31, 2013, and 341 by the end of last month. And cloud service providers increase the amount they spend by 3.5 times over a two-year time frame after their initial purchases, according to Nimble. For Global 5000 customers, the multiplier is 3.3.

Yet, despite the flurry of activity with large customers, Nimble’s average selling price remained flat.

“We are growing the number of large deals substantially, which is moving the average sale price up, but at the same time, we’re also acquiring record numbers of new customers, and there’s a lot of smaller customers with that,” said Leary. “You blend those two together, and it’s kept our average selling price roughly flat for the past few quarters. And to us, that’s not a bad thing. We want to be doing both.”


August 29, 2014  2:50 PM

PernixData wants to cache your mission critical apps

Dave Raffo

Caching software startup PernixData’s recent funding round was heavy in cash and even heavier in cachet.

The $35 million funding round was far from the richest of the year – Nutanix last week scored three times more – but it attracted individual investments from industry heavyweights. Salesforce.com CEO Marc Benioff, Seagate CEO Steve Luczo and Silver Lake managing partner and founder Jim Davidson joined the round, signifying a good amount of industry buzz around PernixData. The startup’s previous investors included Virtual Instruments CEO (and former Symantec CEO and current Microsoft director) John Thompson and former Palo Alto Networks CEO Lane Bess.

PernixData will need that buzz as much as the money as it tries to convince enterprises to run its software on mission critical data. The company’s FVP software virtualizes and pools flash and RAM across servers to accelerate reads and writes to shared storage.

“Most startups try to establish a beachhead, and the beachhead is something like VDI that is not mission critical,” PernixData CTO and founder Satyam Vaghani said. “We go after mission critical applications. Most of our sales are an infrastructure play.”

PernixData claims 200 customers in little more than a year of being in the market.

Those include infrastructure as a service (IaaS) provider Virtustream, which uses FVP to help meet service level agreements (SLAs) for the highest performing of its four storage tiers.

“We put teeth behind our latency SLAs with financial penalties,” said Matt Theurer, senior vice president of solutions architecture at Virtustream.

Theurer said Virtustream has flash in all of its SAN systems, a mix of NetApp, pre-Dell Compellent and Hitachi Data Systems arrays. But he relies on FVP to reduce latency and quickly expand flash pools as new customers come on board running enterprise applications.

“PernixData has the right approach of separating performance from capacity, and it scales linearly,” he said. “But the key for us was that it had write caching.”
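For readers unfamiliar with why server-side write caching matters, the sketch below contrasts it with simply writing through to the array: a write-back cache acknowledges a write as soon as it lands in the fast tier and destages it to the slower backing store later. This is a toy illustration with made-up latencies and class names, not PernixData's implementation; production write-back caches also protect not-yet-destaged data before acknowledging it (FVP, for example, can replicate it to peer hosts).

import time

class BackingStore:
    """Stand-in for a slow shared array; the 5 ms delay is invented."""
    def write(self, key, value):
        time.sleep(0.005)  # simulate array and network latency

class WriteBackCache:
    """Toy write-back cache: acknowledge on insert into the fast tier, destage later."""
    def __init__(self, store):
        self.store = store
        self.cache = {}     # fast tier (flash/RAM stand-in)
        self.dirty = set()  # keys written but not yet destaged

    def write(self, key, value):
        self.cache[key] = value  # fast path: the write is acknowledged here
        self.dirty.add(key)

    def flush(self):
        for key in list(self.dirty):
            self.store.write(key, self.cache[key])  # slow destage to the array
            self.dirty.discard(key)

store = BackingStore()
cache = WriteBackCache(store)

start = time.perf_counter()
for i in range(100):
    cache.write(i, b"data")  # each write returns at memory/flash speed
acked_ms = (time.perf_counter() - start) * 1000

cache.flush()  # in real systems destaging happens asynchronously in the background
print(f"100 writes acknowledged in {acked_ms:.2f} ms; destaging the dirty data takes far longer")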

Virtustream began using FVP in May 2014 in its Amsterdam data center. It has since expanded to other data centers, and Theurer said FVP will soon be deployed at all of its data center sites, which include San Francisco; Vienna, Virginia; two in London; and the SuperNAP colocation facility in Las Vegas.

PernixData’s Vaghani said the next version of the software will virtualize file storage to go with its current block virtualization.

Menlo Ventures led PernixData’s Series C funding round, with previous investors Kleiner Perkins Caufield & Byers, Lightspeed Venture Partners, Mark Leslie, Lane Bess and Thompson participating. Its total funding is $62 million.


August 28, 2014  12:12 PM

Brocade sees SSDs changing storage landscape

Dave Raffo

Switch vendor Brocade is doubling down on its efforts to prepare for the emergence of solid state drives (SSDs) and flash in storage arrays.

Brocade earlier this year instituted a Solid State Ready program for flash and hybrid array vendors to test their systems with Brocade’s Fibre Channel switching. This week it expanded that program to include testing of Ethernet for NAS and iSCSI SANs. Fujitsu America Inc., Hitachi Data Systems, Hewlett-Packard, NetApp, Nimble Storage, Pure Storage, Saratoga Speed, SolidFire, Skyera, Tegile Systems and Violin Memory are part of the Ready program.

Brocade executives have pointed to SSDs as a driver of 16 Gbps FC SANs because SSDs are used in high-performance use cases.

Jack Rondoni, Brocade’s VP of storage networking, said Solid State Ready will help vendors prepare for changes that SSDs will bring.

“People are thinking about their storage architecture differently because of SSDs,” Rondoni said. “I believe it will be as disruptive as server virtualization.

“We’re doing more than others – short of buying a company, which we won’t do – to help transition to SSD technology.”

That was a poke at Brocade’s switch competitor Cisco for its 2013 acquisition of flash array vendor Whiptail.

Rondoni said Brocade added Ethernet to the Solid State Ready program because, while SSD “deployments have been clean for Fibre Channel, they can get dicey with Ethernet.”

While Brocade sells FC and Ethernet switches and sees both playing a role independently in storage, it is far less bullish than Cisco on Fibre Channel over Ethernet (FCoE).

“FCoE to a storage array is a dead technology,” Rondoni said. “It has value in a top of rack switch to the server, but not to storage.”


August 27, 2014  9:58 PM

Nutanix raises $140 million, claims $2 billion valuation

Carol Sliwa

Nutanix Inc. bolstered its status as one of the hottest converged infrastructure companies with today’s announcement of a $140 million funding round – its largest to date – and a valuation claim of more than $2 billion.

The Series E infusion boosted the San Jose, California-based startup’s total to $312 million since its initial Series A funding round of $13.3 million in July 2010. Nutanix claimed investors valued the company at about $1 billion in January with the closing of its $101 million Series D financing. The value nearly doubled with the latest funding round, which was led by Boston-based Fidelity and Wellington, according to a source familiar with the financing.

Nutanix CEO Dheeraj Pandey blogged that the nearly five-year-old company “raised an IPO-like amount, at an IPO-like valuation, in a private round with institutional investors who typically buy at IPO time” but elected not to go public – yet.

“When you’re a public company, sometimes you have a lot of near-term pressures to deliver a certain number for a quarter, and everything gets scrutinized much more, including all the investment decisions that the company makes,” said Howard Ting, senior vice president of marketing at Nutanix.

Ting said making investments as a private company will allow the team to prepare a “really special” initial public offering (IPO), which he said “is not that far off,” probably in the next calendar year.

“We could decide to push that off,” said Ting. “With this funding, we actually have a lot of flexibility.”

Nutanix plans to use the latest round of funding to invest in sales, research and development, customer support and marketing of its software-driven converged infrastructure products, which are often referred to as “hyper-converged” for their tight integration of virtualization, compute and storage resources in a single box.

Ting said the Series E funding process was in the works for months, and when the closing happened on Tuesday, it made sense to make the announcement today in connection with the biggest conference in the virtualization and data center industry, VMworld.

VMware made a big splash at the conference on Monday with the launch of EVO:RAIL, which combines its compute, networking and storage resources into a hyper-converged infrastructure appliance. Hardware partners that have signed on to build the appliances – which will include VMware’s Virtual SAN (VSAN), vSphere and vCenter Log Insight – include Dell, EMC, Fujitsu, and Super Micro. None expect to ship products until close to year’s end.

The fourth quarter also happens to be the time frame when the XC Web-scale Converged Appliance, which combines Nutanix software and Dell hardware, is expected to become generally available. Nutanix announced the OEM agreement with Dell in June.

“You’re seeing a situation where especially these bigger companies are cooperating and competing with each other and going to market in ways that could be viewed as direct or at least indirect competitors. I think it’s just a natural evolution of the market,” said Jayson Noland, a managing director at Robert W. Baird & Co. Inc.

A Baird Equity Research report released last Friday listed Nutanix as the market leader in the hyper-converged market, with approximately 50% share, and VSAN as the most notable competitor given VMware’s market reach.

“There’s a lot of changes going on, and there’s going to be some large legacy IT companies that make this transition, and there’s going to be others that don’t. There are going to be small and new and shiny companies that never make it out of the gate, and there are going to be others that are wildly disruptive,” said Noland. “With a valuation like [$2 billion] and a $140 million capital raise, I would say investors are betting that Nutanix is going to be one of the big winners.”

Nutanix claims to have more than 800 customers, including 29 that have purchased more than $1 million in aggregate products and services. The list includes Airbus, Honda, ConocoPhillips, Toyota and the U.S. Navy.

Arun Chandrasekaran, a research director at Gartner Inc., said the new funding round and overall invested capital will help Nutanix dispel some end-user concerns about vendor viability. He added that he expects more rapid global expansion on the heels of the funding and the OEM deal with Dell.

Pandey claimed his company’s ambition “is much bigger than what you know and see of this company today, hastily classified by so-called experts as a hyper-converged hardware vendor. We surprised those industry pundits by doing the Dell OEM deal, and all the ‘software-defined’ hypocrites were left scratching their heads on how to respond.”

The Nutanix CEO noted the increasingly heated competition in the market space, claiming in his blog post that the company is at war. “And to deal with the shenanigans of big companies, we don’t just need the technology muscle, but also some world-class sales, marketing, distribution, and packaging muscle,” Pandey wrote.


August 25, 2014  7:06 AM

Marvin/Mystic = EVO: RAIL

Dave Raffo

SAN FRANCISCO — VMware opened VMworld 2014 today by launching the product that had been known for months by its code names Projects Marvin and Mystic.

While EVO: RAIL won’t exactly rattle the Bay Area like the 6.0 earthquake that hit yesterday morning, it does shed light on a product that VMware kept a tight lid on since it began shipping its Virtual SAN (VSAN) storage software last March.

EVO: RAIL is a bundle of VSAN, vSphere, and vCenter Log Insight that VMware is selling to hardware vendor partners, allowing them to create hyper-converged appliances combining compute, storage and networking. The appliances will support a specific set of hardware specs, and the software gives them all a common look and feel.

VMware said Dell, EMC, Fujitsu, Inspur, Net One Systems Co. and SuperMicro have signed on as EVO partners, but no products are expected to be generally available until late 2014. Those products will compete with hyper-converged appliances sold by Nutanix, SimpliVity and a few others, although VSAN still lacks data deduplication, replication and other data management features that others have.

The appliances give VSAN customers another option for running the software, and the first that includes it pre-bundled on hardware. Customers can also install VSAN on their own hardware or on pre-tested Ready Node configurations.

“I feel over time the hyper-converged model will rule the day,” said Mornay Van Der Walt, VMware’s VP of research and development.

You can find more on EVO: RAIL on SearchVirtualStorage.com, and you can keep up with all the news from the show at SearchServerVirtualization’s VMworld 2014 home page.

And if you’re wondering about VMware parent company EMC’s take on EVO: RAIL, look here.


August 22, 2014  7:31 AM

HP takes StoreVirtual to the cloud

Dave Raffo

Hewlett-Packard, struggling to find a successful storage platform outside of its 3PAR arrays, is making its StoreVirtual Virtual Storage Appliance (VSA) more cloud-friendly.

HP today said it will sell StoreVirtual VSA – a virtual appliance based on LeftHand iSCSI SAN technology – as an integrated option for HP Helion OpenStack and Helion OpenStack Community Edition.

HP has also added a full set of RESTful APIs, an OpenStack Cinder interface and Linux KVM hypervisor support for StoreVirtual, which already supported VMware vSphere and Microsoft Hyper-V.

HP also said it will add space reclamation and multipathing to StoreVirtual but did not give a timeframe. Space reclamation automatically frees unused space when users delete VMs and files, and multipathing is designed to increase throughput and reduce latency.

Craig Nunes, HP storage VP of marketing, said StoreVirtual is used mostly in remote and branch offices, and by small companies and service providers “shifting from hardware to software strategies.”

Smaller all-flash array, virtual backup appliance too

HP is also adding a smaller-capacity version of its entry-level 3PAR all-flash array, the StoreServ 7200. The All-Flash Starter Kit starts at $35,000 for eight 480 GB commercial MLC solid-state drives. The Starter Kit version of the two-node StoreServ 7200 will be available in late September. HP also has a four-node all-flash 3PAR StoreServ 7450 array.

On the backup front, HP is adding a 4 TB StoreOnce VSA and Hyper-V support to go with its previous VMware support. StoreOnce VSA, a virtual appliance version of HP’s StoreOnce deduplication disk backup targets, launched in 2013 with 10 TB licenses. The 4 TB version costs $1,400.

HP storage declines continue

HP storage revenue continued its long pattern of decline last quarter, coming in at $796 million – down four percent from last year. During the company’s earnings call Wednesday, CEO Meg Whitman said HP’s traditional storage – mainly its midrange EVA and high-end XP arrays – declined 14 percent. The category that HP calls converged storage – mostly 3PAR, StoreVirtual and StoreOnce – grew nine percent over last year, with 3PAR growing more than 10 percent.

Whitman said the storage market is shifting from high-end to midrange systems, and “I believe this plays into a sweet spot for HP.”


August 21, 2014  10:21 AM

Nexenta eyes object, VSAN, orchestration

Dave Raffo

ZFS storage veteran Nexenta is moving into the crowded object storage market.

Nexenta this week revealed plans for NexentaEdge as part of its strategy to expand beyond its NexentaStor software that runs on commodity hardware and takes advantage of open-source ZFS.

Unlike NexentaStor, NexentaEdge has been developed from the ground up with the vendor’s own proprietary IP along with “some ZFS DNA,” according to CEO Tarkan Maner. NexentaEdge will run on industry-standard x86 servers and support iSCSI block storage, OpenStack Cinder and Swift, and Amazon S3 object APIs. It will use global deduplication to reduce network bandwidth and cryptographic hashing for data integrity.

Maner points to global deduplication as the major differentiator for Edge. “A problem with object storage is it requires a lot of network bandwidth,” he said. “We reduce that bandwidth with deduplication.”
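As a rough illustration of the general technique, not Nexenta's implementation, the sketch below fingerprints chunks with a cryptographic hash so that identical content is stored, and transferred, only once. The chunk size and class name are assumptions made for the example.

import hashlib

class DedupChunkStore:
    """Toy content-addressed store: chunks are keyed by their SHA-256 digest,
    so a chunk that is already present is never stored or sent again."""

    def __init__(self):
        self.chunks = {}     # digest -> chunk bytes
        self.bytes_sent = 0  # stand-in for network traffic

    def put(self, chunk):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in self.chunks:  # only new content crosses the wire
            self.bytes_sent += len(chunk)
            self.chunks[digest] = chunk
        return digest                  # callers keep the digest as a reference

    def get(self, digest):
        return self.chunks[digest]

store = DedupChunkStore()
chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # the third chunk is a duplicate
refs = [store.put(c) for c in chunks]
print(store.bytes_sent)  # 8192 rather than 12288: the duplicate was never resent

The same digests also double as an integrity check: re-hashing a chunk on read and comparing the result with its key detects silent corruption.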

Nexenta will preview Edge at VMworld next week, and then begin an open beta program. Maner said he expects the software to become generally available by the end of 2014.

Object storage adoption is picking up steam, but Nexenta will have to make its 1.0 product mature quickly. Its competitors include EMC, Quantum, Cleversafe, Scality, Caringo, Exablox, and Amplidata.

IDC storage analyst Ashish Nadkarni said Nexenta faces stiff competition, but at least it is not a newcomer to storage. Nexenta has been in the storage market since 2008.

“It’s a first generation product, and it’s going to get compared to what’s already in the market,” Nadkarni said. “And what is already in the market has been around at least two or three years and is probably ahead from functionality and maturity standpoints. Nexenta will have to play catch up. But on the positive side, they have an existing business and experience with storage customers. Having a lot of storage experience can help them come up to speed quickly.”

Nexenta also upgraded NexentaStor and added a VMware Virtual SAN (VSAN) edition of its NexentaConnect storage acceleration and management software this week. NexentaStor 4.1 now supports all-flash storage systems running on commodity hardware, providing optimization for low latency. NexentaConnect for VSAN adds SMB and file services to VSAN, which supports only block storage.

Maner said Nexenta will also add a Fusion product in 2015 that allows customers to manage and analyze Nexenta and other file, block and object storage systems through a common interface. The first version of NexentaFusion will include multi-tenant monitoring and real-time analytics with version 2.0 adding storage provisioning and orchestration, according to the vendor’s roadmap.


August 21, 2014  9:06 AM

Solid state requires different storage performance metrics

Randy Kerns

The use of solid state technology in the form of NAND flash for storage systems changes the way we need to evaluate storage. While it brings power, space, and reliability advantages, the main reason for using solid state is performance – it accelerates applications.

Still, storage vendors often characterize their flash systems in ways better suited to spinning disk. The numbers usually quoted for storage systems are:

  • IOPS – The number of I/Os per second that a storage system can perform.
  • Bandwidth – The measure of throughput for sustained data transfer as an MBps or GBps number.  Bandwidth measures are important when dealing with a high volume of data.  The data transfer rate is dependent on the size of the data transferred (block size) and the overhead processing between each block; see the short calculation after this list.  Infrastructure options — including the fabric and the network reliability to ensure transfer completions without needed retries — are also considerations.
  • Latency – This represents the time required for an I/O operation to complete.
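To show why block size matters when reading these numbers, here is a quick back-of-the-envelope calculation; the IOPS figure and block sizes are made up and do not describe any particular product.

# Bandwidth is roughly IOPS x block size, so the same IOPS rating
# implies very different throughput at different block sizes.
def bandwidth_mbps(iops, block_size_kb):
    return iops * block_size_kb / 1024  # MB per second

for block_kb in (4, 8, 64, 256):
    print(f"{block_kb:>4} KB blocks at 100,000 IOPS -> "
          f"{bandwidth_mbps(100_000, block_kb):,.0f} MBps")
# 4 KB blocks work out to roughly 391 MBps, while 256 KB blocks would need
# 25,000 MBps, far more than typical interfaces could carry. That is one reason
# a single IOPS number says little about what a specific workload will see.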

These metrics need to be considered differently with an all-flash storage system.  First, IOPS is an aggregate number for a system but does not indicate what benefit an application or workload will achieve from a particular system.  As an aggregate number, it can be deceiving based on the size or scale of a system. The maximum number needs to be placed in the context of the overall workload supported by the entire storage system.

Latency is the more important measure for solid state. Performance measurements go back to mainframe storage systems, when disk rotational latency and the ability to queue I/Os were individually measured. Response time was the important measure because it included latency and queue time. With a storage system design based on solid state technology, queuing is not the major factor as it is with electro-mechanical devices such as disk drives.

Many people mix the terms response time and latency, and some vendors use them interchangeably.  Service time is latency plus the data transfer time.  Latency with disk drives is seek time plus rotational latency before the data transfer can begin.  However, vendors do not usually quote service time; they use the term latency because it compares more favorably with the latency of spinning disk.  Data transfer is limited by the interface and network connections, so it is less of a product advantage to highlight.
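A small worked example may help keep the terms straight; all of the figures below are invented for illustration rather than measured from any product.

# Invented figures to show how the terms relate.
seek_ms, rotational_ms, transfer_ms, queue_ms = 4.0, 2.0, 0.5, 3.5

disk_latency = seek_ms + rotational_ms             # 6.0 ms before the transfer starts
disk_service_time = disk_latency + transfer_ms     # 6.5 ms
disk_response_time = disk_service_time + queue_ms  # 10.0 ms as the host sees it

flash_latency, flash_queue_ms = 0.2, 0.05          # no seek, no rotation, little queuing
flash_response_time = flash_latency + transfer_ms + flash_queue_ms

print(f"disk: {disk_response_time:.2f} ms, flash: {flash_response_time:.2f} ms")
# disk: 10.00 ms, flash: 0.75 ms. The gap comes almost entirely from latency,
# which is why latency (or response time) is the number to compare for flash.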

Given that vendors quote latency and response time interchangeably without referring to service time, those two values must be used for comparative evaluations.  The important factors are how fast an I/O completes and whether that time is predictable. High variations in I/O completion times create management problems in addition to their effect on the business.

For all-flash arrays, latency or response time is the important measure.  IOPS needs to be discounted: because it is an aggregate measure, IOPS will not give you a good understanding of acceleration and the value it brings. How fast an I/O can be completed is the most important first-level consideration for solid state storage.

The Evaluator Group has additional guidance on how to measure solid state storage performance.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


August 15, 2014  7:34 AM

Pivot3 secures $12 million in funding for hyper-converged expansion

Sonia Lelii

Hyper-converged infrastructure vendor Pivot3 secured another $12 million in funding this week, bringing its total funding to about $100 million in 10 years.

Pivot3, based in Austin, Texas, has been selling hyper-converged systems longer than better-known (and better-funded) competitors Nutanix and SimpliVity. But Pivot3 customizes its systems to go after targeted markets such as video surveillance and VDI, while the others are more data center-centric.

Pivot3’s vSTAC family of hyper-converged systems all run on the same vSTAC 3 operating system but are packaged with applications that support specific verticals.

Pivot3 began selling to the video surveillance market and then added solutions for VMware-based Horizon virtual desktop deployments. CEO Ron Nash said the company has installed hyper-converged infrastructure solutions in more than 1,000 customer locations.

“The underlying technology is hyper-converged,” he said. “We take the same product and package it to solve a business problem for the business users. Most of our customers don’t know that is what they are using, particularly in the video surveillance market since they are not technologists. One of our customers is a bunch of hospitals in the United Kingdom. They use the hyper-converged infrastructure and VDI but the staff knows it as a production enhancer. They see it as a production tool.”

Nash said the new funding will be used to add vertical products, which will involve new partners.

“We’ll have more solutions for vertical markets and that is where the partners come in,” said Nash. “They have additional applications. The strategy for a company this size is to do what it needs to do to go public. We think this company has a broad enough technology that we can be an independent company at some point. We are on that path.”

The funding round was led by new investor S3 Ventures and current investors InterWest Partners and Mesirow Financial.

