The losses keep coming for Overland Storage, which reported its revenue dropped and losses grew last quarter and for the last fiscal year. And there is no tangible progress on its proposed merger with Tandberg Data that Overland CEO Eric Kelly said could stabilize the company.
Overland’s revenue for last quarter dropped to $12.1 million from $15.3 million a year earlier. The company lost $5.4 million, double its loss in the same quarter last year.
Overland’s fiscal year ended June 30, and its revenue of $48 million for the year was down from $59.6 million the previous year. Its loss of $19.6 million for the year was up from a loss of $16.2 million the previous year. Overland finished the year with $8.8 million in cash.
Revenue from the SnapServer DX NAS platform increased more than 60% year-over-year to a still-paltry $6.2 million last quarter. Tape revenue dropped 14% year-over-year.
For the full year, tape revenue dropped to $15.4 million from $18.7 million, and disk-based revenue fell to $9.6 million from $11 million.
During Overland’s previous earnings call in May, Kelly said he was discussing a merger with Tandberg Data with Cyrus Capital Partners, the company that owns Tandberg. Kelly said at the time that the Tandberg merger “could create a stable foundation for increased revenue and profitability” by combining Tandberg’s RDX removable disk and tape automation with Overland’s tape and disk technologies.
Overland executives never mentioned Tandberg on Wednesday’s earnings call until an analyst asked about the status of the talks.
“Unfortunately, right now we can’t discuss any of the activities that are going on with Tandberg,” Kelly answered. When the analyst asked if that meant the offer was terminated, Kelly said, “I’d love to answer that question. But unfortunately, I can’t address that. But when I can, I definitely will give you an update.”
Overland CFO Kurt Kalbfleisch said the vendor has a timetable for reaching break-even but won’t disclose it. He did say the company needs between $15 million and $16 million in revenue to get there. “I don’t believe we’re prepared to now give a specific quarter of when we believe that will take place,” he said.
Break-even will require not only a revenue increase, but decreases in spending. Kelly said Overland reduced operating expenses by more than $3.5 million through June and will save more money when the lease expires on its San Diego building.
Overland recently signed a partnership with Sphere 3D that Kelly said will lead to a product that will let customers access business applications from the cloud through mobile devices. Kelly was named chairman of Sphere 3D.
“We plan to deliver those legacy applications from the cloud or from an appliance, enabling access from any mobile device, creating a large new data management market opportunity for Overland,” Kelly said of the products that will come from the Sphere 3D partnership. “We plan to roll out a comprehensive product line over the next 12 months. Our market focus will be on key verticals where storage is obviously growing and mobility is important, such as healthcare, government, financial services and education.”
Whether that all happens soon enough to save Overland remains to be seen. The company will need more successful products a lot sooner than 12 months out to make it.
As noted in a previous Storage Soup post, when data storage companies launch products at industry gatherings like the recent VMworld tech fest, there’s often an interesting back story or related information. Vendors are typically willing to share some insights that provide context for the product lines or a look into product roadmaps and developing technologies.
With its acquisition of Arkeia and its backup software, WD instantly carved out a place for itself in the backup appliance market. WD Arkeia Network Backup is still available as a software product, as well as bundled on preconfigured appliances. WD introduced two low-end desktop backup appliances with raw capacities ranging from 4 TB to 16 TB to go along with the four rack-mounted backup appliances it rolled out earlier this year. Any of these devices could be used in remote/branch offices to do local backups that can be replicated back to a central site, said Bill Evans, general manager of WD’s SMB business unit. Arkeia has been around for a while, with roots going all the way back to 1996; the company, in both its original and WD incarnations, targets midsized customers. Its software features Arkeia’s own data deduplication technology, which the company calls Progressive Deduplication.
Syncsort has been around since the late sixties, when it was founded as a mainframe software company, but a few years ago it remade itself to focus its data protection products exclusively on NetApp systems. That single-mindedness appears to have paid off: Peter Eicher, senior product specialist, said the company now has more than 500 customers as a result of the NetApp partnership. Syncsort NSB pairs the company’s DPX data protection app with NetApp’s snapshot capabilities. The result is a data protection platform that can work in physical, virtual and cloud environments. Eicher also noted that Syncsort is re-architecting the app’s foundation to make it easier to add more advanced features in the future.
Well-known for its memory products, Kingston Technology also has a full line of solid-state storage products for enterprises, smaller businesses and consumers. At VMworld, Kingston debuted its newest enterprise-class drive, the SSDNow E50, a lower-cost version of its E100 SSD. The E50 is a SATA 3.0-based 2.5-inch drive, offered in 100 GB, 240 GB and 480 GB versions. Cameron Crandall, senior technology manager, said Kingston found that many customers were using its drives for read caching, so they didn’t need the endurance of the higher-end E100 product. Kingston originally got into the solid-state market by providing SSDs for desktop and laptop PCs, a business that’s still going strong. The company has a number of projects in the works, including PCIe-based solid-state devices, which it expects to introduce next year; by the end of 2014, it also expects to have a 2.5-inch PCIe product based on the NVMe standard. Considering its strong heritage in the DRAM market, Kingston is also keeping an eye on developments in DIMM-based flash storage.
Extending a business from surveillance systems to VDI appliances may seem like a neat trick, but Pivot3 has managed to pull it off. The company sells preconfigured vSTAC appliances that they say make deploying a virtual desktop environment a snap. The appliances include servers, storage and software—including VMware Horizon Suite for VDI. The newest model, the vSTAC R2S Appliance will ship in October; it can support between 117 and 154 desktops, according to Olivier Thierry, chief marketing officer. Thierry said that, on average, Pivot3 customers are using the appliances to virtualize 500 to 900 desktops, but he noted that vSTAC can scale to up to 12 nodes in a cluster. He also noted that interest in VDI is growing, with companies often opting to go the virtual route when faced with upgrading their Microsoft Windows PCs. Healthcare, state and local government and universities are among the verticals in which Pivot3 has seen the most activity to date.
While EMC’s belated VNX2 launch was no surprise today, the storage vendor did throw in a few unexpected twists. First, it revealed the first version of its ViPR software-defined storage platform would be generally available later this month. And the biggest surprise was it took the wraps off something called Project Nile, which the vendor has not discussed before.
EMC didn’t go that deep into Project Nile, which it described as an elastic private cloud storage platform.
In his EMC Pulse blog today, here is how Amitabh Srivastava, president of EMC’s advanced software division, described the key elements of Project Nile:
- Project Nile will be designed to deliver a streamlined and highly automated user experience from purchase through deployment and consumption.
- Project Nile will be engineered to enable “Click and Go” access to block, file and object storage depending on a customer’s workload needs. And it possesses all the benefits of the Public Cloud by being designed for massive scale, geo-distribution and elasticity.
- Project Nile is intended to support multiple standard APIs including Amazon Simple Storage Service (S3), OpenStack Swift, HDFS and EMC Atmos. This means that it will help developers more easily move applications between on-premise and Public Cloud environments without the need for costly and time-consuming application rewrites.
- Finally, Project Nile will be offered at a very aggressive price point, redefining the economics of on-premise Web-scale storage deployments.
Parts of this sound similar to what Srivastava laid out in a ViPR demo he gave at VMworld last week, particularly the click and go access to block, file and object storage.
In a video inserted into his blog, Srivastava also described Nile as “simplified on-premises Web storage without the constraints of public cloud storage.” So is Nile a version of ViPR designed for private clouds? It’s too early to tell, given what little detail we have on Nile. According to EMC, Nile won’t be generally available until 2014.
ViPR will be available this month. EMC says that is ahead of industry expectations, although those expectations were based on EMC’s public statements. Out of the gate, ViPR will support EMC VNX, VMAX, Isilon, VPLEX and RecoverPoint, as well as NetApp arrays.
ViPR breaks management into the control plane (infrastructure) through the ViPR Controller and the data plane through ViPR Data Services. Object storage services will be in the first release, with support for APIs from Amazon S3, OpenStack Swift and EMC Atmos.
“We’re separating the application from physical storage,” is how Srivastava described ViPR at VMworld. “The job of ViPR Controller is to automate, automate, automate. We’re giving customers the choice of what array to use.”
EMC said ViPR will cost around 2 cents per GB per month for customers using both the Control Plane and Data Plane.
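To put that per-GB figure in perspective, here is a back-of-envelope cost sketch at the cited ~2 cents/GB/month rate. The 100 TB capacity is a hypothetical example, not a figure from EMC.

```python
# Rough cost sketch at the cited ~2 cents per GB per month ViPR price.
# The 100 TB capacity below is a hypothetical illustration.

PRICE_PER_GB_MONTH = 0.02          # USD, as cited by EMC
capacity_gb = 100 * 1024           # 100 TB expressed in GB

monthly = capacity_gb * PRICE_PER_GB_MONTH
annual = monthly * 12
print(monthly, annual)             # -> 2048.0 24576.0
```

So a 100 TB deployment would run roughly $2,000 a month, or about $25,000 a year, at list price.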
There are plenty of storage product announcements at a big IT show like the 2013 edition of VMworld, and our intrepid reporting staff does a great job of covering those product rollouts, whether they’re blockbuster announcements or enhancements on a more modest scale. But besides the new-stuff ballyhoo, it’s always interesting to catch up with the storage vendors to get a little of the “local color” related to new and upcoming products.
Nutanix is one of the pioneers of the “hyper-converged” model where storage, servers, network and hypervisors are tightly packed in a preconfigured package, and it claims to be gaining some traction in the market. Howard Ting, their vice president of worldwide marketing, said that based on the last quarter, they’re on track for an $80M annual run rate. Right now they have about 300 customers. Initially targeting the midmarket, they’re moving up the ladder to the enterprise, where they say they have 30 to 40 Global 2000 accounts. About 50% of their customers are using their gear for virtual desktops, with another 30% running virtual server environments on the Nutanix box. A typical configuration averages 15 to 20 nodes, but Nutanix said they’re currently developing a 1600-node installation for a customer. Noting that “all our intellectual property is software” and that their systems are built on “commodity components,” Ting said it’s conceivable that Nutanix might sell a software-only product in the future.
Bridging data centers and cloud storage services has been Nasuni’s forte, and they’ve recently enhanced their product and its place in the enterprise by beefing up the Nasuni Management Console that allows central management of multiple Nasuni filers. Customers use Nasuni filers in a variety of ways, said Karen Kiffney, manager of product marketing, but backup is likely the leading application to date, with linking ROBOs to the data center a close second. Many of their backup users have found that Nasuni sans backup app works fine for copying files to the cloud for basic backup. Nasuni doesn’t support “full” cloud-based DR, as copies of data have to be downloaded from the cloud for recovery, but Kiffney said they’re working on virtual recovery in the cloud. On Nasuni’s roadmap, look for boosts in hardware performance and capacity as the company looks to better serve larger companies.
Back in the dark old days, a lot of storage shops ponied up big bucks for storage resource management (SRM) software hoping the app suites would help them manage rapidly expanding storage operations. Unfortunately, a lot of those companies struggled with long and painful implementations only to find that a mere fraction of the suite’s features were useful for them. SRM suites earned the notorious sobriquet of “shelfware” as they often ended up stashed away and forgotten. ManageEngine, a subsidiary of online app provider Zoho Corp., seems to have avoided the pitfalls of past SRM attempts with a modular set of management applications that you can mix and match to assemble the appropriate control center for your environment, according to Raj Sabhlok, president. In addition to being able to select specific management apps, you can also opt for either a perpetual license or a subscription. The storage management module is called OpStor, which you can get as a standalone app or with other apps to manage servers, networks, etc. A perpetual license fee for OpStor ranges from $3,995 (plus $799 annual maintenance) to $14,995 ($2,999) depending on the number of devices that will be supported; one-year subscriptions range from $1,495 to $6,995, with maintenance included. A try-before-you-buy program lets you download OpStor and try it out before making a purchase.
Micron, a key solid-state storage player, indicated that they’ll soon have an NVMe-compliant product. The new NVMe standard, based on PCIe, is expected to make managing server-based flash easier. Micron’s also been active in developing products around the latest solid-state buzz—3D Flash—along with companies like Samsung and Toshiba. A Micron spokesman said they announced their 3D work two years ago, but they’re approaching the technology cautiously as it’s not yet clear that 3D’s benefits will outweigh the added costs. The picture for triple-level cell (TLC) flash is similar; right now, Micron says that given the amount of development and engineering that’s gone into MLC it’s more economical than TLC at this point. Micron is also working on developing software to cluster server-based solid-state storage—an effort that’s combining internal efforts and partnering with third parties. They’re seeing growing demand for that kind of clustering.
Nexsan, Imation’s newest division by dint of its January 2013 acquisition, is rolling out new unified storage systems. Previously, Nexsan offered unified systems that paired iSCSI with NAS, but 60% to 65% of its customers wanted Fibre Channel connectivity, according to Mike Stolz, vice president of global marketing and support at Imation. While not all customers immediately take advantage of multiprotocol capabilities, it’s often seen as investment protection should storage needs change. As part of Imation, Nexsan can take advantage of its parent company’s extensive channel network based on its storage legacy as well as a deeper international reach. Although Nexsan systems were often used as backup targets, Stolz said customers have gone well beyond that application, especially with Nexsan’s newer, higher-performing systems. While Nexsan systems are being used by cloud storage service providers, the company hasn’t seen much demand from end users for products that would integrate on-premises storage with those cloud storage services.
Three days after Violin Memory filed to go public, another all-flash startup, Pure Storage, closed a whopping $150 million funding round today.
Pure calls it the largest private funding round in the history of the enterprise storage industry, and I don’t think anyone would argue. (Dropbox raised $250 million in a 2011 round, but it’s not really an enterprise storage company). The vendor’s release also calls it a “pre-IPO” round but the massive funding may actually delay Pure’s filing to go public.
Pure CEO Scott Dietzen said the funding will help the company grow fast and work up to an initial public offering, and he expects the buildup to an IPO will help him recruit top talent.
In the meantime, the well-funded (a total of $245 million over five rounds) startup is duking it out with EMC and other large storage companies.
“We see EMC as our closest competitor, so we want to get big quick,” he said. “How can we compete with EMC unless we grow fast?”
Dietzen said he believes Pure has an 18-month technology lead over EMC’s XtremIO flash array, which is still in controlled release. Pure considers its data deduplication and compression a competitive edge over the established storage companies and the handful of all-flash startups.
Dietzen said Pure has been growing revenue about 50% every quarter since it started shipping its FlashArray in May 2012. He said he is in no hurry to go public or become profitable. The goal is to grow as fast as possible.
“It’s not our intent to pursue profitability quickly,” he said. “We’re taking everything we derive from sales and investing it back into the business. We see this as a market land grab, and if we want to contend with the big guys we have to get big fast.”
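For a sense of what that growth rate implies, here is a back-of-envelope compounding sketch. It treats the roughly 50% quarter-over-quarter figure as exact, which it isn’t, so the result is only illustrative.

```python
# What "about 50% every quarter" compounds to over a full year.
# The 50% figure is approximate, so this is only an illustration.

quarterly_growth = 0.50
yearly_multiple = (1 + quarterly_growth) ** 4   # four quarters of compounding
print(round(yearly_multiple, 2))                # -> 5.06
```

In other words, sustained 50% quarterly growth would multiply revenue roughly fivefold year over year.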
He said the size of the funding round also sends a message to customers and prospects that Pure intends to stay around as an independent company.
“When we talk to customers they ask, ‘Aren’t you guys going to get sold at some point? Who’s going to buy you, we want to know who we’ll be dealing with?’” he said. “But it’s always been our intent to build a long-term independent company.”
According to market research firm Gartner, Pure had $20.6 million in revenue in 2012, when its products had been generally available for little more than half the year. That put it at 5.8% of the flash array market, behind Violin ($72 million), EMC ($61.2 million), IBM ($56 million), NetApp ($42.2 million), Hitachi Data Systems ($25.7 million) and Nimbus Data ($21.8 million).
Dietzen said the new money will enable Pure to ramp up sales and marketing to support international growth with a concentration on Asia, Europe and Latin America. He said Pure stands at more than 240 employees today, and expects to add at least 100 more before the end of 2013 and hit 600 in 2014.
He pledged to continue to spend on research and development, too.
“We’re holding up well relative to our competition, but relevant to our own aspirations, there is still work to be done,” he said.
Pure also added early investor and former Data Domain CEO Frank Slootman to the board. Former Data Domain chairman Aneel Bhusri is already on the Pure board. Bhusri and Slootman took dedupe pioneer Data Domain public in 2007 and two years later sold the company to EMC for $2.1 billion.
Well-funded flash array startup Violin Memory has filed papers to go public in hopes of raising $172.5 million by selling shares on the New York Stock Exchange.
Violin’s initial public offering (IPO) is expected in late September. The move has been expected – Violin executives have talked about going public for more than a year and the vendor is the early market leader in the fast-growing flash storage array market.
According to Violin’s S-1 filing, the company had revenues of $11.4 million in 2010, $53.9 million in 2011 and $73.8 million in 2012 (the fiscal year ended Jan. 31, 2013), and it reported $51.3 million in the six months that ended July 31, 2013. Violin has yet to come close to a profit, though, with a loss of $109.1 million last year and $59.2 million over the last six months.
Violin’s revenue growth has not been a straight trajectory. Early sales were helped by a reseller deal with Hewlett-Packard (HP), which accounted for 65% of Violin’s revenue during 2011. When HP ended that deal in 2012, Violin’s revenue dropped from $21.9 million for the quarter that ended Jan. 31, 2012 to $10.2 million the following quarter. HP accounted for 37% of Violin sales last year. HP now sells its own storage arrays in all-flash configurations and has not qualified Violin’s flagship 6000 Series Flash Memory Arrays. The decline from HP sales likely caused Violin to delay its IPO filing.
Violin began selling its first all-flash array, the 3000 Series, in 2010 and followed with the 6000 Series in 2011. In March, Violin announced its Velocity PCIe flash cards to move into server-side flash.
According to research firm Gartner, Violin held an industry-high 19.4% of the flash array market in 2012, with storage giants EMC, NetApp, IBM and Hitachi Data Systems trailing. Those larger companies have all launched all-flash arrays in the past year, so Violin’s challenge as a public company will be to convince customers it can do flash better than the storage stalwarts.
Violin did not disclose the amount of its total venture and strategic funding, but it has raised at least $268 million since 2010 when current CEO Don Basile came to Violin from Fusion-io. According to Violin’s S-1 filing, its NAND supplier Toshiba owns more than 14% of the company.
Dave Donatelli, the man who set Hewlett-Packard’s storage strategy, is no longer in charge of HP’s storage, server and networking group.
HP CEO Meg Whitman said Wednesday that COO Bill Veghte will take over Donatelli’s post as executive vice president and general manager of the Enterprise Group. Donatelli remains with HP and has been given a special assignment to scout out possible acquisition targets.
Donatelli came to HP from EMC in 2009 and ran the enterprise group that includes storage, servers and networking. He spent $2.7 billion of HP’s money on networking company 3Com and another $2.35 billion on storage vendor 3PAR, whose systems have become the highlight of HP’s storage sales.
Apparently, Whitman is happy with these acquisitions. According to HP’s press release announcing the changes, Donatelli’s new role includes “identifying early-stage technologies as he did successfully with 3PAR and 3Com.”
I don’t know what HP’s definition of early stage is, but 3Com was 31 years old and 3PAR was an 11-year-old public company when HP bought them in 2010.
Donatelli was reassigned because of poor sales in his group. “This is less about technology than it is about a number of other areas,” Whitman said during HP’s earnings conference call Wednesday. “I think we have the right technology. We have got to work on go-to-market.”
Those areas, she explained, include having “the right products targeted to the market segments that we choose to go after with the right cost structures” and simplifying HP’s “selling motion.”
3PAR storage systems – now called HP 3PAR StorServ – have sold well but HP’s older disk storage and tape portfolios have continually declined. HP storage revenue declined 10% year-over-year last quarter to $833 million. 3PAR arrays increased more than 10%, “but the decline in tape and other areas, we can do better than that,” CFO Cathy Lesjak said on the call.
Overall, the enterprise group’s revenue declined 9% year-over-year, so HP’s enterprise problems go beyond storage. The storage strategy probably won’t change much, at least not immediately. Former 3PAR CEO David Scott continues in his role as senior VP and GM of Storage. Lesjak said the StoreOnce deduplication products that were relaunched under Donatelli are also doing well. It’s the traditional storage – mainly the EVA arrays and tape – that is struggling. HP is offering customers a migration path from EVA to 3PAR systems to get them off the older arrays.
Donatelli was considered a candidate to replace Mark Hurd as HP CEO before Whitman got the job in 2011, and had been mentioned as a possible successor to Whitman. His new role indicates HP is looking to pick up its acquisition pace again, following the Autonomy fiasco.
“We will be back in the market as we think about acquisitions that can further our objectives,” Whitman said. “We are very mindful of the event that we just came off with Autonomy, so don’t worry about that. We are very focused and disciplined. But … there is no question that acquisitions are going to have to be a part of how we turn this company around.”
Storage devices such as hard disk drives may be seen as commodities, although storage systems that businesses depend on are clearly not commodities. Systems that store the most critical information inside data centers are complex and sophisticated, developed by companies that put their intellectual property into them. Storage systems are hardened over time with usage and support from engineering teams.
This does not mean the best storage systems appear complex. Far from it. It takes great effort to make a complex system, one responsible for storing and retrieving what is arguably a business’s most valuable asset, present itself as simple. Simple in this case means the system can be administered by someone who is not a specialist in it, and that it can operate in an optimal state automatically.
The underlying complexity in a storage system resides in several areas and there is variation in the more successful products in use. “Special” implementations to improve performance are the most often cited in promotional material. There are many different caching, tiering, and data management techniques used, and they continue to change as new technologies are added.
Special features such as snapshots, remote replication, data migration, and storage virtualization are complex to implement with the integrity and performance to work flawlessly every time. The basic functions of reliability, availability, and maintaining data integrity are expected but require extensive development and testing to handle the different conditions that can occur.
Controlling complexity can be difficult in storage systems. Features or capabilities added to an existing design that were not anticipated in the original architecture add complexity and make it increasingly difficult to make iterative refinements. A good example of this is the thin provisioning capability that was added to existing designs versus newer systems that were initially designed for thin provisioning.
When thin provisioning was added to established systems, it had limitations around mapping RAID groups and expanding by adding more disks. Many of these add-on implementations were subsequently updated with improved designs.
Maturity is crucial in a storage system. Certain situations occur only after wide usage over time. A system platform that has been used for an extended period will be more stable and customers will have greater confidence in it. The investment by the vendor in continuing a storage system with updates and long-term support is a distinguishing characteristic for systems that IT can depend upon. The critical nature of storing and protecting information is why the support and continuity is so important.
Devices may be seen as commodities, but storage systems, given their importance in protecting information, are definitely not. When IT chooses a storage system, it should be one that includes the intellectual investments to meet its critical demands.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Object storage vendor Cleversafe this week raised $55 million in a series D funding round, bringing the total amount the company has raised to about $100 million since it was founded in 2004 and started pushing its product in 2007.
CEO John Morris said he expects the round to be the last that his company will need.
The new funding was led by investor New Enterprise Associates, and includes previous investors Motorola Solutions Venture Capital, Alsop Louie Partners, OCA Ventures and Presidio STX.
“We expect this to be the last round we’ll need before reaching profitability. We are focused on continuing to expand our technology, to roll it out to more and more customers to be able to fulfill the escalating market demand,” Morris said in an e-mail interview.
Morris said the Chicago-based company has more than 100 employees, with operations in London and Tokyo. Cleversafe has doubled the number of employees in the last year, and that growth will continue with the new money.
“We are hiring and looking to add talent in engineering, development, test-dev, sales and marketing,” Morris said. “As in the past, we continue to balance the acquisition of talent with the needs of the company, and how best to serve our growing customer base.”
Morris said the company plans to expand into new vertical market segments and continue to innovate its products. Cleversafe is among a group of object-based storage vendors, including Scality, Amplidata, Caringo and Exablox, looking to capitalize on the rapid growth of file data with a still-emerging storage technology designed to push past the limits of RAID and handle petabytes if not exabytes of storage. The larger storage vendors are also looking to move in on the market. For instance, object-storage support will be the early focus of EMC’s ViPR software-defined storage.
Morris became Cleversafe CEO in May after spending four years at Juniper Networks as EVP of field operations and strategic alliances, three years with Pay By Touch as COO and then CEO, and 23 years with IBM. Founder Chris Gladwin gave up the CEO post and became vice chairman while continuing to set Cleversafe’s technical vision.
The eighth annual Flash Memory Summit in Santa Clara, Calif., was once again an interesting mix of solid-state builders and buyers. While much of the exhibit hall featured exhibitors showing off components and testing gear in hopes of snagging OEM deals, there were still plenty of vendors with end-user—or close to end-user—products to browse.
There weren’t a lot of enterprise product announcements that we hadn’t already covered on SearchSolidStateStorage.com, but the real “star” of the show was a new flash architecture that should be showing up in products soon. The technology, called 3D NAND flash, was described in a keynote presentation by Samsung’s Dr. E.S. Jung, executive vice president and general manager of the company’s semiconductor R&D center. The concept of 3D flash is analogous to the perpendicular recording technique that’s been used to dramatically increase the capacity of hard disk drives.
The goal of the 3D architecture is to solve some of the problems that currently dog NAND flash as lithography gets denser. As the cells that store data on flash chips get closer and closer, the likelihood of interference among cells becomes greater, which can affect the reliability and performance of the flash. With 3D, cells still reside next to each other, but expansion is achieved by stacking them. The stacked cells can keep their distance from each other to lessen the likelihood of interference while still increasing the chip’s capacity.
Jung said that Samsung has been able to create architectures with 24 layers of cells to create 128 Gb chips; he foresees using this technique to ultimately build 1 Tb chips. He said that 3D technology will yield 10X endurance, use about half the power of traditional side-by-side cell architectures, run about 20% faster and be 50% smaller.
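A quick back-of-envelope calculation shows what the jump from 128 Gb to 1 Tb implies if density gains come purely from adding layers. This is a simplification; in practice vendors also tune cell geometry, so treat the layer count as illustrative rather than a roadmap figure.

```python
# Layers needed for a 1 Tb chip if per-layer density stays at the
# 128 Gb / 24-layer level Jung described. Assumes capacity scales
# only with layer count, which is a deliberate simplification.

GBITS_PER_CHIP = 128      # Samsung's announced 24-layer part
LAYERS = 24
TARGET_GBITS = 1024       # 1 Tb

per_layer = GBITS_PER_CHIP / LAYERS          # ~5.33 Gb per layer
layers_needed = TARGET_GBITS / per_layer
print(round(layers_needed))                  # -> 192
```

Under that assumption, a 1 Tb chip would need on the order of 192 layers, eight times the stack height of the announced part, which hints at why the transition is expected to take several product generations.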
There are a number of other companies working on 3D flash designs, including Toshiba. Joel Hagberg, vice president of marketing for Toshiba’s storage products business unit, said there are a number of different approaches to 3D flash that Toshiba is currently considering. In another keynote, Gil Lee of Applied Materials described some of the challenges of turning 3D flash into a product by making the transition to manufacturing. Lee noted that the manufacturing process wouldn’t require “cutting-edge lithography” but would need other modifications to current processes.
Several flash chip vendors indicated that we could expect to see wider production of products based on 3D NAND flash technology in 2014.