Storage Soup

August 26, 2016  10:36 AM

Pure Storage: We do flash better than those geezers

Dave Raffo
Pure Storage

When it comes to all-flash storage, Pure Storage claims experience is no match for youth.

Pure, born in 2011 as a flash-only vendor, is far outgrowing the storage establishment, a collection of vendors that Pure CEO Scott Dietzen calls “the 20-year-olds.”

The toddler Pure Thursday night reported $163 million in revenue for last quarter, a 93% increase over last year and above the high end of its forecast. Pure forecast revenue of between $187 million and $195 million for this quarter, which would be 45% year-over-year growth if revenue comes in at the midpoint of the guidance.
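
A quick back-of-the-envelope check of that midpoint math (a minimal Python sketch; the implied year-ago figure is derived from the guidance, not stated on the call):

```python
# Sanity-check Pure's guidance: what year-ago revenue does 45% growth at the midpoint imply?
low, high = 187.0, 195.0            # guidance range, in $ millions
midpoint = (low + high) / 2         # 191.0
implied_year_ago = midpoint / 1.45  # revenue a year earlier if growth is 45%
print(f"midpoint ${midpoint:.0f}M implies a year-ago quarter of about ${implied_year_ago:.0f}M")
# -> midpoint $191M implies a year-ago quarter of about $132M
```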

Dietzen said his vendor is outgrowing the market by a long shot because it built its systems for flash from the start.

“Putting SSDs into storage designed more than 20 years ago simply cannot deliver on the demands of modern IT,” Dietzen said on Pure Storage’s earnings call.

Dietzen said Pure’s FlashArray is replacing “complex, services-intensive storage, designed for mainframes or client-server” systems. He called FlashArray smart storage, “that offers the simplicity, automation, resiliency and customer-friendly business model essential for cloud IT. Smart Storage allows customers to keep more data for far less costs, protected with strong security and delivers the bandwidth necessary to mine that data for new analytic insights or even machine learning.”

Dietzen’s critique of the competitors ignores that EMC XtremIO – the all-flash market leader – and IBM FlashSystem were acquired from companies that designed them from the ground up for flash. EMC and IBM also have modified their older storage platforms to work with flash, as have NetApp and Hewlett Packard Enterprise. Those four vendors and Pure share the leaders quadrant of Gartner’s Magic Quadrant for all-flash arrays released this week.

Pure Storage certainly has the results to back up Dietzen’s claims. As he pointed out, Pure nearly doubled its revenues year-over-year last quarter “at a time when many of our competitors are shrinking.”

He expects the growth to accelerate when Pure Storage’s FlashBlade object and file storage system hits the market. FlashBlade is in limited availability, and while some early beta testers have purchased the array, it will not likely generate significant sales before 2017. But in combination with the SAN-based FlashArray, Dietzen predicted FlashBlade will make Pure a storage powerhouse.

The young company is still going through growing pains, however, particularly on its bottom line. For all of its sales success, Pure loses tens of millions of dollars every quarter as it increases sales and marketing spending to fuel its growth. It lost $63.8 million last quarter, compared with a $59.6 million loss in the same quarter last year, although Pure executives predict they will be “cash flow positive” by the second half of 2017. Pure finished last quarter with $570 million in cash and investments.

“With FlashBlade ramping and exciting FlashArray innovations yet to come, we are only getting started,” Dietzen said.

August 24, 2016  1:22 PM

Nimble says customers quick to embrace all-flash

Dave Raffo

Nimble Storage, which was relatively late selling all-flash arrays, is trying to make up for lost time since launching its Predictive Flash platform in February. The vendor received a jolt from all-flash sales but not enough to stop or even slow its significant losses.

Nimble said it added 133 all-flash customers in its first full quarter selling all-flash arrays. CEO Suresh Vasudevan said 79 of the all-flash customers were new to Nimble. Overall, 23% of bookings in the quarter were all-flash systems.

Nimble Tuesday reported revenue of $97.1 million. That beat the high end of its forecast by $1.1 million, and was up from $80.1 million a year ago.

However, the sales came at a cost as Nimble continues to increase investments in sales and marketing to compete with larger vendors. It lost $39.3 million in the quarter – up from $29.5 million a year ago – and executives would not predict when the company will become profitable. Nimble has $194.2 million in cash and investments as a cushion, which gives it a little time to stop the bleeding.

Nimble projected $100 million to $103 million in revenue this quarter, which at the midpoint is a 26% increase over last year.

Nimble added around 700 customers in the quarter. Most of them bought hybrid arrays, but Vasudevan said the selling price for all-flash often doubled that of hybrid systems. The vendor reported bookings from large enterprises (deals over $250,000) grew 37% over last year.

Nimble is looking to broaden its all-flash market with its new entry-level Predictive Flash AF1000 system, which carries a list price starting at $40,000.

“We believe that complex storage solutions from legacy vendors are no longer competitive creating a significant share shift opportunity,” Vasudevan said during Nimble’s earnings call. “At the same time, we believe that younger storage companies do not have the breadth of functionality of our (all-flash) platform. Consequently, we believe that we have the opportunity to emerge as a leading next generation infrastructure provider.”

He said Nimble’s priorities at the start of 2016 were to invest in building a strong pipeline, drive faster growth in large enterprise and cloud service provider markets and drive traction in sales of all-flash arrays. He said meeting those goals will help Nimble’s long-term financial position. The flash market is as competitive as it is potentially lucrative, Vasudevan said.

“The all-flash array market growth is stronger than what analysts had projected for this point, and it’s continued to remain strong,” he said. “Now that said, every single all-flash win that we’ve had is one where we’ve had to take on two, three other all-flash array vendors so there are no uncontested deals.”

Vasudevan said Nimble’s main all-flash competitors are EMC, NetApp and Pure Storage.

August 19, 2016  2:00 PM

EMC: Commercial drone storage market primed for takeoff

Garry Kranz

Commercial use of unmanned aerial vehicles (UAVs), or drones, is just starting to emerge. Drone storage, on the other hand, isn’t getting as much attention.

EMC wants to gain a toehold in the market for drone-related scalable storage, particularly with its scale-out Isilon NAS and object-based Elastic Cloud Storage. Equipped with high-resolution cameras and sensors, a single drone can capture terabytes of data in one flight.

Due to the vehicles’ modest size, however, internal drone storage capacity is limited, said Josh Bernstein, a vice president at EMC’s Emerging Technology division.

“We see drones expanding data lakes into data oceans,” Bernstein said.

A report in May by consulting firm PricewaterhouseCoopers pegged the emerging global market for commercial drone applications at $127 billion.

Industries such as agriculture, construction, energy and government use drones to generate vast amounts of data. Drone technologies capture images in far greater detail. Farmers deploy aerial drone imagery to design more efficient watering plans or pinpoint potential crop threats. The images help building companies improve finite element modeling, fracture analysis and thermal analysis.

Aside from managing the large files, companies need the capability to analyze and mine the data to derive value from it. Bernstein said EMC’s interest in drone storage directly ties to its initiatives in open source software-defined storage.

“To be a good drone pilot, you first have to be a good pilot. You also have to be able to successfully consume open source software. It turns out the people that have (those) skills often are also our customers.”

Bernstein said EMC customer EagleView Technologies, based  in Bothell, Wash., uses drones to provide 3D aerial roofing models across a range of industries.  EagleView each year adds tens of millions of images to its big data drone storage repository based on EMC Isilon NL series scale-out storage, VCE Vblock 300 converged architecture and EMC XtremIO all-flash storage arrays.

This month, the U.S. government gave Alphabet Inc.’s X subsidiary approval to test delivery drones in an effort to formalize safety regulations.

August 19, 2016  7:47 AM

NetApp sells more flash, spends less

Dave Raffo

NetApp made great progress last quarter with its sales of all-flash storage, Clustered Data OnTap (CDOT) and cost-cutting measures.

NetApp Wednesday reported better revenue and income results than expected, although its sales continued to slide on a year-over-year comparison.

Overall revenue of $1.29 billion was down around 3% from last year but within NetApp’s guidance and higher than financial analysts expected. Product revenue of $660 million dropped 1% from last year, after a string of steeper declines.

NetApp’s income of $64 million reversed a loss of $30 million in the same quarter last year, with the turnaround achieved mainly by trimming operating expenses 13% to $652 million.

Despite beating expectations, NetApp remains a ways from reversing its year-over-year revenue declines. Its revenue forecast of between $1.265 billion and $1.415 billion for this quarter was higher than analysts expected, but even at the high end it would be a year-over-year decrease.

“We’re clearly making progress, but still have work to do as we operate in the low growth IT spending environment,” NetApp CEO George Kurian said. “We are controlling what we can and are increasingly confident in our ability to execute as we streamline the business and pivot to the growth areas of the market.”

NetApp reported all-flash revenue grew approximately 385% from last year, mainly from All-Flash FAS sales. All-flash array sales came to around $194 million, around 30% of product revenue.

“Customers are replacing hard disk installations with flash, making flash the de facto standard for new on-premise deployments,” Kurian said.

He added that sales from SolidFire, the all-flash vendor NetApp acquired in February, remained “immaterial.” NetApp also sells an EF Series of all-flash arrays for high performance computing, but that made up only a small amount of the vendor’s overall flash sales.

Kurian said CDOT sales increased 35% from last year, with 82% of FAS arrays shipped last quarter including CDOT compared to 65% last year. CDOT is now running on 32% of NetApp’s installed base. NetApp struggled early on with CDOT conversions because it required a disruptive upgrade from its previous version of OnTap, but CDOT installations have grown steadily over the last nine months or so.

There is a possibility that NetApp’s spending cuts could hurt its long-term success, though. While the company returned $228 million to shareholders through share repurchases and a cash dividend last quarter, it cut year-over-year research and development for the third straight quarter. NetApp spent $192 million on R&D, down from $218 million a year ago.

With the storage industry more competitive than ever and newer technologies potentially more disruptive, lack of R&D spending could put NetApp at a disadvantage. It was slow moving into the all-flash array market, and still hasn’t come up with a product in the fast-growing hyper-converged market.

Krista Macomber, senior analyst for Technology Business Research (TBR), predicted the cut in R&D will make it tougher for NetApp to keep up with new developments and grow revenue.

“This is a threat, as agile and cutting-edge innovation increasingly influences storage vendors’ ability to differentiate,” Macomber wrote in a research note on NetApp. “Long-standing architectures and purchase models become massively disrupted by customers’ need to serve rising, data-centric demands from lines of business with greater efficiency and agility. As a result, TBR believes it will become more challenging for NetApp to sustain bottom-line improvements as it seeks to remain aligned with customers’ evolving workload requirements.”

August 18, 2016  6:09 PM

Quobyte updates “Google-like” storage software

Carol Sliwa
Block storage, File storage, NAS

The “Google-like” software storage system that Berlin-based Quobyte introduced last year is getting an update.

With its new Quobyte 1.3 release, the German startup added space-saving erasure coding to protect file data, boosted the performance of block storage, enhanced the product’s management capabilities, and extended support from Linux to Windows.

The Quobyte software runs on commodity server hardware, uses a highly scalable POSIX-compliant parallel file system, and supports file, block and object storage. CTO and co-founder Felix Hupfeld compared the system to technology in use at Google, where he and co-founder Björn Kolbeck once worked as engineers in storage infrastructure.

Hupfeld said “Google-like storage” works with all workloads and cluster sizes and runs on any infrastructure, with only a few people needed for maintenance because the system is highly automated and fault tolerant.

Quobyte’s newly added erasure coding allows applications to directly write erasure-coded files, making the system useful for archival and primary storage, according to Hupfeld.

“We make erasure coding a primary storage access method,” he said. “We’re not recoding data in the sense that we write everything replicated and then recode.”

Hupfeld said modern CPUs are fast enough to render the resource impact of erasure coding irrelevant. He said there’s also no significant performance impact with file-based sequential workloads, such as media assets and engineering and scientific data.

“Where erasure coding is very efficient is when you write a file from beginning to end and don’t do in-place updates, like a virtual machine,” he said.

Hupfeld said performance becomes an issue with erasure coding for random I/O with block storage. He wrote in a blog post, “For a random write, the coding engine needs to read all data of the coding group first, recompute the coding parts, and then write out the modified original data along with the coding data.”
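
A minimal sketch of that read-modify-write cycle (illustrative only: it uses simple XOR parity and invented helper names rather than Quobyte's actual coding engine):

```python
# Why a small random write is expensive on an erasure-coded stripe:
# the whole coding group is read, parity is recomputed, and both the
# modified data and the new coding part are written back.
def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def random_write(stripe, index, new_block):
    data = list(stripe["data"])          # 1) read all data blocks of the coding group
    data[index] = new_block              # 2) apply the in-place update
    stripe["data"] = data                # 3) write out the modified original data ...
    stripe["parity"] = xor_parity(data)  # 4) ... along with the recomputed coding data
    return stripe

stripe = {"data": [b"\x01\x02", b"\x03\x04", b"\x05\x06"]}
stripe["parity"] = xor_parity(stripe["data"])
random_write(stripe, 1, b"\x09\x09")  # one tiny write touches the entire coding group
```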

“For virtual machines, it would be a complete disaster if you used erasure coding because you’re recomputing data all the time,” he said in an interview.

Hupfeld advises against the use of erasure coding for virtual machines (VMs) and databases. He said customers should use replication with those block-based workloads.

Quobyte deployments range in capacity from less than 50 TB to petabytes. Customers include a German cloud service provider, an online video recording company, a container service provider and a U.S.-based university, according to Hupfeld. He estimated the average capacity at 200 TB and said users tend to look for alternatives to NetApp and Isilon at about 100 TB.

The Quobyte system makes three copies of data for full fault tolerance, so a customer with 100 TB of data would need 300 TB of storage. Using erasure coding with file data, the storage requirement could drop to 140 TB or 150 TB, depending on the encoding the customer chooses, Hupfeld said.

Quobyte’s “standard 8 + 3” erasure coding – or eight “data parts” and three “redundancy parts” – would enable the system to tolerate the simultaneous failure of three storage drives.
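
The capacity math behind those figures works out roughly as follows (a back-of-the-envelope sketch that assumes the full 100 TB is protected with the standard 8 + 3 encoding):

```python
usable_tb = 100

# Three-way replication: every byte is stored three times.
replication_raw = usable_tb * 3                                   # 300 TB

# 8 + 3 erasure coding: 11 parts are written for every 8 parts of data,
# and any 3 of the 11 can be lost without losing the file.
data_parts, redundancy_parts = 8, 3
erasure_raw = usable_tb * (data_parts + redundancy_parts) / data_parts

print(replication_raw, round(erasure_raw, 1))  # 300 137.5 -- in line with the 140-150 TB cited
```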

“The good thing about erasure coding is it’s not just more efficient, it’s also more fault tolerant,” Hupfeld said. “You can lose more hard drives without losing data. And that just makes it an even better candidate for archival data.”

Another file-centric enhancement with the 1.3 release is fully parallelized metadata operations. Quobyte rewrote one of the core parts of the database system to take advantage of modern multicore CPUs, Hupfeld said.

Quobyte also extended the product’s management capabilities with support for cross-interface access control lists (ACLs), integrated multi-tenancy, and hierarchical quota support for organizations with large-scale systems.

For block storage, Quobyte optimized the entire I/O path to improve performance and reduce latency to sub-millisecond levels when the system runs “on good hardware,” Hupfeld said.

The Quobyte software was in limited availability last year and became generally available in January. Hupfeld said the major focus in future product releases will be even more performance improvements.

“If you have more performance, the less hardware you need, the less power you need, and so on,” Hupfeld said. “Performance is very important.”

Since the product’s launch, Quobyte has added support for major container platforms, including Docker and Mesos. Hupfeld said a Quobyte volume driver would be available with the Kubernetes 1.4 release.

“What people are sometimes doing is attaching block storage devices to containers, but then this always gives these very tight couplings between containers and the data,” he said. “With Quobyte as a file system, you can give applications access to specific data like you used to in non-container environments.”

August 18, 2016  7:59 AM

Nutanix lines up partners to sell on Cisco UCS

Dave Raffo
Cisco UCS, Nutanix

Nutanix, which has OEM deals with server vendors Dell and Lenovo, is now selling its software on Cisco UCS servers through channel partners.

Nutanix today revealed it has independently validated Cisco UCS C-Series servers to run Nutanix hyper-converged software. Nutanix has forged a meet-in-the-channel agreement with Cisco resellers to sell Nutanix Prism and Acropolis software on Cisco UCS C220 and C240 rack-mount servers.

Cisco has its own HyperFlex hyper-converged system as well as partnerships with VMware, SimpliVity and StorMagic that allow their hyper-converged software to be sold with UCS servers. Cisco is not actively involved in the Nutanix arrangement, which is between Nutanix and Cisco channel partners.

“This is strictly a Nutanix initiative that will benefit Cisco UCS customers,” said Greg Smith, Nutanix director of technical marketing. “The testing of our software was a Nutanix-driven initiative with support from several large Cisco partners who have deep expertise with UCS. We have worked with Cisco in the past and we currently work with them to make sure our joint deployments fully support Cisco networking.”

Nutanix will not sell its software directly to UCS customers. All deals will go through channel partners who will do all the integration work. Nutanix supports Cisco’s Application Centric Infrastructure (ACI) architecture for deploying applications.

“We know there is demand to use UCS for hyper-converged services, and early efforts to use UCS for hyper-convergence have driven that demand to Nutanix,” Smith said.

Nutanix named Sirius Computer Solutions, HCL Technologies, and SVA among the partners who will sell its software with UCS servers.

Dell has sold its XC Series based on Nutanix software since 2014. The future of that relationship had been questioned after Dell said it would acquire EMC, which sells its own hyper-converged appliances and owns hyper-converged software vendor VMware. But Dell and Nutanix in June announced a multi-year extension of their OEM deal.

Lenovo this year began selling its Converged HX Series appliances running Nutanix Prism and Acropolis.

Nutanix now makes its software available on three of the four major server platforms. The missing vendor is Hewlett Packard Enterprise, which sells hyper-converged products based on its own software.

“We have been on a journey to evolve our product from a single point product to a platform,” Smith said. “We want our software to be able to run on a variety of hardware configurations anywhere in the data center.”

August 16, 2016  10:57 AM

Rubrik grabs $61 million, aims for cloud

Dave Raffo

Converged data protection startup Rubrik turned to the cloud with a software release that allows customers to use Amazon AWS and Microsoft Azure as well as Rubrik’s appliances to store data.

Rubrik launched in April 2015 with 2U appliances integrated with software that performs backup, deduplication, compression and version management. The new software release, Firefly, supports physical workloads such as Microsoft SQL Server and Linux, and is available in a software-only version for remote and branch offices and public clouds.

Rubrik also closed a $61 million Series C funding round, bringing its total funding to $112 million.

Firefly’s capabilities include search and analytics, archiving and copy data management.

Cohesity, which launched around the same time as Rubrik with a similar product, added cloud support last April. Rubrik CEO Bipul Sinha said the company had its eye on the public cloud from the start and referred to its appliance as a “cloud time machine,” but the original version had only limited support for AWS and none for Azure.

“We started Rubrik with a focus on backup and recovery for VMware,” Sinha said. “But from day one we had a vision for cloud data management – backup, DR, orchestration, compliance, governance and more applications in the cloud.”

Firefly will be available as software only for remote offices. “Selling a full appliance for five to 15 VMs is not cost effective for customers,” Sinha said. “We are selling software-only for them, and they can replicate back into the data center to a Rubrik cluster, or to Amazon or Azure.”

Firefly provides a globally indexed namespace for data and uses zero-data cloning for instant access to data. If on-premises data is lost, customers can bring data back from the public cloud. Rubrik includes a single policy engine for automated orchestration, data permissions for data in the cloud and compliance reporting.

Khosla Ventures led the funding round, with previous investors Lightspeed Venture Partners and Greylock Partners participating. Sinha said the funding will be invested in sales and marketing and to support early customers.

“Our business is growing rapidly,” he said, explaining how Rubrik landed a large funding round at a time when funding is hard to come by.

August 12, 2016  3:22 PM

Solid-state drives bulk up for capacity

Dave Raffo
Samsung, Seagate

SANTA CLARA, California — Solid-state drives have been much faster than hard disk drives from the start, and now they’re dwarfing HDDs in capacity too.

At Flash Memory Summit this week, Seagate demonstrated a 60 TB 3.5-inch SAS drive, and Samsung said it would have a 32 TB 2.5-inch SAS drive out in 2017 and 100-plus TB SSDs by 2020.

The largest-capacity enterprise drive out now is Samsung’s 16 TB drive, which recently began showing up in NetApp all-flash arrays and Hewlett Packard Enterprise 3PAR arrays.

Samsung’s large drives are based on its 512-Gb V-NAND chip. The vendor stacks 16 of the 512-Gb chips to create a 1 TB package, and 32 of those packages are combined into the 32 TB SSD. Samsung points out its 32 TB drive will enable greater density than Seagate’s 60 TB SSD because 24 2.5-inch drives can fit into the same space as 12 3.5-inch SSDs.
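
The arithmetic behind the packaging and the density comparison (a minimal sketch using the figures above; the drives-per-enclosure counts are Samsung's claim, not an independent measurement):

```python
chip_gb = 512 / 8                   # one 512-gigabit V-NAND chip = 64 GB
package_tb = 16 * chip_gb / 1000    # 16 stacked chips ~= 1 TB per package
drive_tb = 32 * package_tb          # 32 packages ~= 32 TB per 2.5-inch SSD

# Density claim: 24 2.5-inch drives fit in the space of 12 3.5-inch drives.
samsung_per_shelf = 24 * 32         # 768 TB
seagate_per_shelf = 12 * 60         # 720 TB
print(round(drive_tb, 1), samsung_per_shelf, seagate_per_shelf)
```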

Seagate will own the capacity crown for a while if it gets its 60 TB SSD out before Samsung’s larger drives arrive.

Seagate senior director of product management Kent Smith said he expects the 60 TB drive to be available within a year. He said the drive will enable active archives. “Take a social media site with a lot of photos that people need to access quickly,” he said. “People hate waiting. This is for when you need lots of capacity but you need it to respond quickly.”

SSDs are already making 15,000 RPM HDDs scarce and relegating 10,000 RPM drives to servers. With the larger drives, SSDs can also move into traditional capacity workloads.

“Flash for bulk data becomes attractive in places where data center space is limited,” said DeepStorage consultant Howard Marks.

HDD giant Seagate is trying to show it is serious about SSDs. Its main spinning disk rival Western Digital has invested heavily in flash, including its $17 billion acquisition of SanDisk completed earlier this year. Seagate has been more active on server-side flash — it also launched new Nytro NVMe cards at FMS – but has been slow to embrace enterprise SSDs.

“It’s a surprise to me that Seagate hasn’t taken its dominance in hard drives and moved that to SSDs,” Objective Analysis analyst Jim Handy said during a flash market update at FMS.

Samsung also had more products to talk about than big SSDs. The vendor said it expects to release an ultra-low latency Z-SSD and launch a 1 TB ball grid array (BGA) SSD in 2017. Ultra-thin BGAs are for notebooks and tablets, but the Z-SSD will be used for enterprise systems running applications such as real-time analysis. Samsung senior SSD product manager Ryan Smith said the first Z-SSD product will be 1 TB, with larger capacities planned.

One area Samsung is in no rush to be first in is quad-level cell (QLC) SSDs that store 4 bits per NAND cell. While other vendors said they would have QLC in 2017 or 2018, Samsung’s Smith said he sees no reason to hurry past triple-level cell (TLC) flash.

“We feel strongly that TLC is the right strategy,” he said. “What do you gain from QLC? We decided what we’re currently offering is the best choice.”

August 12, 2016  6:32 AM

Cloudian and AWS team up for on-premise cloud storage

Sonia Lelii

Cloudian and Amazon Web Services are now offering a service that allows customers to use the Cloudian HyperStore Hybrid Storage offering, which stores data locally but leverages the Amazon S3 object storage service.

AWS will manage the usage tracking and billing for customers.

It targets applications and data that customers want to keep on-premises and operate in a hybrid cloud mode, said Paul Turner, Cloudian’s chief marketing officer. That kind of data is stored behind the organization’s firewall using the S3-compatible HyperStore software.

“What is different here is you can procure it from the Amazon marketplace. What we have done is implemented a service where you can go to the (AWS cloud storage) marketplace and sign up for the S3 service and do it locally,” Turner said.

“It’s in the customer data center and as the storage is consumed, you pay as you go and all the billing is done through Amazon S3,” he said. “It’s an OPEX spend which is unusual because up until now customer data center solutions are a CAPEX spend.”

The service is a hybrid cloud storage offering, so customers can also use HyperStore to tier data into the public cloud, either to S3 or Amazon Glacier. HyperStore is an S3-compatible object storage product. The AWS and HyperStore service currently is available in regions across the United States and EMEA.

“As we go forward we will roll it out in other regions,” Turner said.

The cost is three cents per gigabyte, based on average usage.

“Customers have been asking for this and one thing Amazon does really well is respond to customers,” Turner said. “They will build what is needed.”

August 11, 2016  7:48 PM

Hubstor Microsoft public cloud archive goes cool, deep

Garry Kranz

Hubstor is fine-tuning the Microsoft Azure-based cloud archive platform it launched in July. The Ontario, Canada, startup introduced CoolSearch, which it bills as searchable Microsoft public cloud-integrated deep storage for enterprises that must retain inactive data indefinitely.

Hubstor’s standard self-service active archive lets users access and share archived data stored in Microsoft public cloud storage. Hubstor’s role-based access controls are integrated with Microsoft Active Directory for user authentication.

Rather than knowledge workers generally, CoolSearch is aimed at privileged user groups that control access permissions. The idea is to enable corporate legal or security teams to quickly spin up high-volume, low-cost searches of unstructured data related to compliance, defensible data deletion or e-discovery.

The CoolSearch data-aware archive is an isolated tenant that resides in Hubstor’s Azure cloud or in a customer’s Microsoft public cloud account.  CEO Geoff Bourgeois touts CoolSearch as an alternative to legacy approaches to searching discoverable storage.

“We’re responding to demand from organizations that don’t care about end user access. They just need searchable, fully managed cool storage for investigations, compliance, and litigation activity,” Bourgeois said.

After a query is run, CoolSearch puts the results in cool Azure Blob storage, Microsoft’s public cloud tier for infrequently accessed data. Hubstor scales down a CoolSearch search cluster once indexing is finished. As with its dedicated cloud archive service, Hubstor CoolSearch is available as a monthly subscription, with pricing based on consumption of Microsoft public cloud resources.

Hubstor provided a pricing chart based on a 100 TB CoolSearch cluster with triple redundancy, 25 TB of content indexing and 3% egress. Depending on the search cluster and its activity level, the vendor claims search and indexing costs range from 5 cents to 9 cents per GB. The Microsoft public cloud CoolSearch tenant can be switched to an inactive state to reduce costs when it’s not in use.

The CoolSearch managed service includes automatic data mapping to orphan users. PST splitting and optional deep processing aid discovery of stored Microsoft Outlook PST files. Policy-based index scoping controls which data gets ingested in a full-context indexed search.

CoolSearch discovery searches accept keywords, wildcards, proximity, Boolean operators, boosting, grouping, fuzziness and regular expressions. Search restrictions include location, tags, active or orphan users, groups or data owners. Options include full-content search or configured metadata fields. Full-text searches use hit highlighting, paging, sorting and relevancy to rank results. CoolSearch also allows customized metadata searches.
