The idea that the human race has a global brain or a composite consciousness isn’t a new one. It’s at least as old as the Transcendentalist movements of the 1800s, and the rise of computer technology has long sparked imagination about the possibilities for making such universal connection literal. The frequent recurrence of the idea among varying groups and individuals might even be considered evidence that such a superconsciousness exists. Creepy.
One of the most recent variations of this idea is currently making its way around the Internet, in the form of an essay by Kevin Kelly titled “Evidence of a Global Superorganism.” In it, Kelly draws on those concepts of collective consciousness, and postulates that the Internet/cloud is in itself a distributed, virtual, collective consciousness.
But more importantly, Kelly argues, the particular consciousness “emerging from the cloak of wires, radio waves, and electronic nodes wrapping the surface of our planet,” isn’t actually our own.
This megasupercomputer is the Cloud of all clouds, the largest possible inclusion of communicating chips. It is a vast machine of extraordinary dimensions. It is comprised of quadrillion chips, and consumes 5% of the planet’s electricity. It is not owned by any one corporation or nation (yet), nor is it really governed by humans at all. Several corporations run the larger sub clouds, and one of them, Google, dominates the user interface to the One Machine at the moment.
None of this is controversial. Seen from an abstract level there surely must be a very large collective virtual machine. But that is not what most people think of when they hear the term a “global superorganism.” That phrase suggests the sustained integrity of a living organism, or a defensible and defended boundary, or maybe a sense of self, or even conscious intelligence.
…It starts out forming a plain superorganism, then becomes autonomous, then smart, then conscious. The phases are soft, feathered, and blurred. My hunch is that the One Machine has advanced through levels I and II in the past decades and is presently entering level III.
This idea is familiar, and maybe a little bit frightening if you’ve read a lot of science fiction or seen The Matrix, although a recent online survey found that people are less afraid of intelligent machines than of “how humans might use the technology.”
Atrato Inc. made a splash earlier this year with its no-maintenance disk array (which was quickly followed to market by Xiotech’s somewhat similar ISE product) and has been relatively quiet since then. Around the time of the product launch, some industry watchers urged the startup to polish its messaging because there were inconsistencies between information on the company’s website and information provided to the press and analysts.
Now it appears Atrato is moving into a more marketing-intensive phase with the promotion of former executive vice president of sales and marketing Steve Visconti to president and CEO. Company founder and former CEO Dan McCormick “has relinquished a day-to-day operational role in his new position of Chairman of the Board,” according to an Atrato press release.
“One can imagine a range of scenarios – some good, some not – that would lead to this move,” Data Mobility Group analyst Robin Harris said in an email this morning. “My sense is that this is a normal progression for a company moving from an engineering/evangelizing mindset to a marketing/selling mindset. I believe they and Xiotech have a unique value proposition. Now is the time for them to flog it hard.”
HP rolled out the StorageWorks SAN Virtualization Services Platform (SVSP) this morning, described in a press release as “a network-based storage platform that pools capacity across HP and non-HP storage hardware. It works together with the HP StorageWorks Modular Smart Array (MSA) and the HP StorageWorks Enterprise Virtual Array (EVA) as well as a number of third-party arrays.”
HP can now offer storage virtualization across its product line; it previously offered it only on the XP systems thanks to an OEM deal with Hitachi. SVSP is based at least partly on LSI’s Storage Virtualization Manager (SVM) software. LSI said at SNW last month that it would be revealing a new partnership for SVM in about a month. An HP spokesperson told me that “we’ve worked with [LSI] on joint development…[this is] not a straight OEM.” HP’s release says SVSP features include online data migration, thin provisioning, and data replication.
We’ll have complete details on SVSP on our news page later today.
I’ve only been in the storage industry a few years, but I’m pretty sure a CEO puppet talking about open-source ponytails is a first for the IT world.
Overland Storage is running out of time in its attempt to transform itself from a tape vendor to a disk system vendor.
Overland has been losing money for years, and last quarter’s loss of $6.9 million left it with $5.4 million in cash. The vendor is betting its future on the Snap Server NAS business it acquired from Adaptec in June and disk backup products, but needs funding to stay alive. On its earnings conference call, CEO Vern LoForti said he hopes to raise $10 million in funding to keep the company going. But getting $10 million in today’s credit crunch isn’t so easy, and Overland has been trying to raise money for at least three months.
In an email to SearchStorage this week, LoForti indicated funding could be coming soon.
“As we indicated in our conference call, we are in discussion with a number of financing institutions concurrently,” he wrote. “We have various letters of intent in hand, and a number of the institutions are currently processing documents for our signature. As we get closer to closing, we will select the one that is most advantageous to Overland. We will publicly announce when the deal is closed. This is #1 on our priority list.”
It has to be the top priority if Overland is to survive. The 10-Q quarterly statement Overland filed with the SEC last week paints a bleak picture:
Possible funding alternatives we are exploring include bank or asset-based financing, equity or equity-based financing, including convertible debt, and factoring arrangements. … Management projects that our current cash on hand will be sufficient to allow us to continue our operations at current levels only into November 2008. [Emphasis added]. If we are unable to obtain additional funding, we will be forced to extend payment terms to vendors where possible, liquidate certain assets where possible, and/or to suspend or curtail certain of our planned operations. Any of these actions could harm our business, results of operations and future prospects. We anticipate we will need to raise approximately $10.0 million of cash to fund our operations through fiscal 2009 at planned levels.
Since it’s already November, the clock is ticking.
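The 10-Q’s “only into November 2008” projection follows from simple burn-rate arithmetic. Here’s a rough sketch; it makes the simplifying assumption that last quarter’s net loss approximates cash burn:

```python
# Rough runway estimate from Overland's reported figures.
# Simplifying assumption: quarterly net loss ~ quarterly cash burn.
cash_on_hand = 5.4e6     # cash reported at the end of last quarter
quarterly_burn = 6.9e6   # last quarter's net loss

runway_months = cash_on_hand / (quarterly_burn / 3)
print(f"{runway_months:.1f} months")  # 2.3 months -- consistent with
                                      # "only into November 2008"
```

Under those assumptions, the company had roughly a single quarter of cash left when it reported, which is why the $10 million raise is existential rather than opportunistic.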
I’m a big fan of SAS. I’ve professed my undying love and devotion to it (at least until solid-state disk becomes just a little more affordable). So why on earth would I be writing about putting VMware’s ESX Server on SATA disks?
I was poking around on the Internet a few weeks back and came across a deal I couldn’t possibly refuse: a refurbished dual Opteron server with 4 GB of RAM and four hard drive bays (one with a 400 GB drive) with caddies and a decent warranty for $229. No, that’s not a typo!
The downside is that the server is all SATA, and ESX won’t install on some generic SATA controllers attached to motherboards, or even on some add-in SATA cards, without some serious cajoling and a fair amount of hunting around on VMware’s support forum. These hacks are not officially (or even unofficially) supported. In fact, VMFS is explicitly NOT supported on local SATA disk (scroll to the bottom of page 22). There are exceptions to this, and there is work on a SATA FAQ where you can get more info.
When my server arrived, I installed an Areca SATA RAID card in it, downloaded the beta Areca driver for ESX 3.5 update 1 and went about the install. Areca is my favorite SATA/SAS add-in card manufacturer. Their cards are stable and have proven to be the fastest cards in their class for the money.
Why not use LSI, you may ask, especially considering that their driver is in the ESX kernel? Well, I like speed, and I usually don’t make compromises when it comes to the performance of a storage subsystem unless it is absolutely, positively required due to some other dependency. (As a side note, I have LSI installed in my Vista x64 desktop due to a lack of x64 driver availability for my Areca card at the time of install. See my Windows on SAS blog post for more info.)
The Areca driver install went smoothly, and I currently have Exchange 2007, a Linux email server, a Windows 2003 domain controller, a Windows 2008 domain controller, a Windows XP desktop, and an OpenSolaris installation on the 1.2 TB of local SATA storage. So far, things look strong and stable with the driver in place. Normally, I try to stay away from needing a driver disk at install time to get to storage, as it makes recovery a nightmare if you are forced to use a live CD to repair your system. But this process was so stable that I may need to add exceptions to my prejudices…
I’ve had acceptable performance (according to my subjective non-scientific benchmarking) with the relatively small number of VMs I have deployed on this server, so all in all, the storage subsystem is performing admirably.
This leads me to the question: Is SATA in ESX’s future? I wonder, seeing that Xen has native support for a myriad of storage subsystems, and Windows Hyper-V will install on SATA. I can’t see VMware being able to hold the line of local VMFS on SCSI/SAS only for much longer.
Couple this with Storage VMotion and it leads to more questions about the storage subsystems used for VMware and how they affect VMware’s already strained relationship with storage vendors. On the other side of that coin, why isn’t Microsoft or Citrix taking any heat for allowing the use of onboard SATA attached to cheap integrated controllers? Have the generic onboard storage controllers (Marvell and Silicon Image come to mind) reached the point where they are deemed capable of handling heavy server I/O loads? I’m sure they’ve advanced, but I’m not convinced they can handle the storage I/O loads that a server with 20 VMs would generate.
But I have a bigger dilemma than all of this. How would I free the terabytes of storage I’d have trapped in these servers if I used eight 1 TB SATA drives? ESX isn’t exactly the platform of choice for a roll-your-own NAS. While it is technically possible to install an NFS server daemon on an ESX host, doing so may not be such a good idea. I’ve seen posts on how to compile and install the NFS module on an ESX host, but I haven’t seen any hard documentation on the overall performance of the system when serving up VMFS via NFS and hosting guest VMs simultaneously. You could also create a guest, turn that guest into an iSCSI or NFS target, and point your ESX server at it, but that is also a bit kludgy.
In other words, just because I can run ESX on SATA disks local to the host doesn’t mean I should. I haven’t found a compelling business reason to have local VMFS on SATA in these times of inexpensive NAS. However, the pleasant side effect of this dilemma is that until I come up with a business case for using SATA as primary storage for an ESX server, I have a super cheap host on which I can test various tweaks and changes to ESX before I roll them into a QA or production environment. Maybe that’s reason enough to have one or two ESX deployments on SATA.
Rackable Systems added another prong to its storage line today. In recent months, the company has revealed its intention to divest the RapidScale clustered file system it acquired with TeraScale in 2006 and to partner with NetApp. Rackable’s latest approach is to build customized high-density server/storage systems for clients in the cloud.
The new CloudRack system is available in either half-rack (22U) or full-rack (44U) configurations and is built using 1U “tray” servers designed by Rackable. The servers hold up to eight 3.5-inch SAS or SATA II hard drives for a maximum of 352 TB in a 44U system. The servers are “rack-focused,” which means they use one fan and power supply system attached to the rack rather than each containing their own power and cooling components. This also leaves room for disks to be horizontally mounted so they can be popped out for service. All the wiring is on the front of the box as well. “No screwdrivers needed,” Rackable director of server products Saeed Atashie said.
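The quoted 352 TB maximum falls out of the tray math. A quick back-of-envelope check, assuming 1 TB drives (the largest SATA capacity commonly shipping at the time; the release doesn’t specify the drive size):

```python
# Back-of-envelope check of CloudRack's quoted maximum capacity.
# Assumption: 1 TB SATA drives -- the drive size isn't in the release.
trays_per_rack = 44    # 1U "tray" servers in a full 44U rack
drives_per_tray = 8    # 3.5-inch SAS/SATA II drives per tray
drive_tb = 1           # assumed per-drive capacity in TB

total_tb = trays_per_rack * drives_per_tray * drive_tb
print(total_tb)  # 352 -- matching the quoted 44U maximum
```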
While there are minimum and maximum configurations, each system will be custom-built depending on the particular customer’s requirements.
Hardware-wise, this box ties in with IDC’s recommendations to storage vendors about building systems for the cloud as discussed in the Enterprise Disk Storage Consumption Model report I posted about here yesterday. However, unlike other systems that have been marketed for the cloud, CloudRack doesn’t offer its own logical abstraction between the hardware and software elements for centralized management or its own clustered file system.
That said, there’s no shortage of storage software vendors in the market today selling products that convert groups of industry-standard servers into something else, whether a clustered NAS system, a parallel NAS system or a CAS archive. Rackable is willing to work with customers to give CloudRack the “personality” they desire, but officials were cagey when it came to saying which specific applications or vendors will be supported. According to Atashie, though, the question is not whether the technology would work, but who would service and support the account, which would also be worked out “on a case-by-case basis.”
The product will become generally available today. When I asked about pricing, guess what the answer was…
When Foundry Networks delayed its shareholders meeting last week to vote on the proposed Brocade deal, there was speculation that A) Brocade wanted to renegotiate the price, or B) it had problems raising $400 million in high-yield bonds to help finance the deal.
Turns out the likely answer was C) both. It’s now clear Brocade did want to renegotiate, and the two companies said Wednesday night they agreed to reduce the price. The new price – $16.50 per share – comes to a total of $2.6 billion. That just happens to be $400 million less than the original purchase price of $19.25 per share, or $3 billion. Now Brocade doesn’t have to come up with the extra financing.
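The numbers line up as simple per-share arithmetic. A hypothetical sketch below infers the share count from the rounded $3 billion total; neither company disclosed that figure in these reports:

```python
# Back-of-envelope check on the renegotiated Brocade/Foundry price.
# Assumption: the implied share count is derived from the rounded
# press totals, not a figure from either company.
old_price, new_price = 19.25, 16.50
implied_shares = 3.0e9 / old_price      # ~156 million shares

savings = (old_price - new_price) * implied_shares
print(f"${savings / 1e6:.0f}M saved")   # lands in the neighborhood of
                                        # the $400M in bonds avoided
```

The press figures are rounded, so the per-share arithmetic comes out slightly above $400 million, but it shows why dropping the price by $2.75 a share let Brocade skip the high-yield bond offering entirely.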
Foundry shareholders are now scheduled to vote on the deal Nov. 7.
About three weeks ago, Sun general counsel Mike Dillon posted about the results of some pre-trial maneuvering between his company and NetApp over the patent-infringement suit brought by NetApp against Sun over ZFS. Dillon was jubilant over the Patent and Trademark Office rejecting three of NetApp’s claims, and the trial court agreeing to pull one of those claims off the table for consideration in the suit. Not a decisive victory for Sun, but not necessarily good news for NetApp, which is seeking damages and an injunction against Sun’s distribution of ZFS and other products that it claims infringe on its patents.
NetApp co-founder Dave Hitz put up a brief response to Dillon’s comments this past Sunday on his blog, in which he attempts to introduce some nuance into the PTO aspect of this issue. “The Patent Office has issued a preliminary rejection of claims in 3 of our patents (out of 16),” Hitz writes. “Such a ruling is not unusual for patents being tried for the first time, and there are two ways to resolve the issue.” One of them is to wait for the PTO to make a ruling on each case, which Hitz calls “the slow way.”
The fast way would be to just proceed with the trial, which Hitz pushes for in his post. “Dillon mentioned issues with three patents, but NetApp currently has 16 WAFL patents that we believe apply to ZFS, with more on the way,” he wrote. “We believe that we have a strong case, and we want to get it resolved.”
He says Sun’s push for the slow way of resolving the dispute indicates the weakness of its position in the case: “To me, the best indicator of strength is to look at which party wants to get on with the case (the one with a strong position), and which party consistently drags its feet and tries to delay (the one with the weak position).”
Hitz’s post doesn’t offer much in the way of new raw information on the proceedings.
According to a new IDC Enterprise Disk Storage Consumption Model report released this week, transaction-intensive applications are giving way as the main source of enterprise data growth to a broader range of apps, along with a tendency to create more copies of data and records for business analytics, including data mining and e-discovery.
The report estimates that unstructured data in traditional data centers will eclipse the growth of transaction-based data that until recently has been the bulk of enterprise data processing. While transactional data is still projected to grow at a compound annual growth rate of 21.8%, it’s far outpaced by a 61.7% CAGR predicted for unstructured data in traditional data centers.
“In the very near future, the management and organization of file-based information will become the primary task for many storage administrators in corporate datacenters,” the report reads. “And this shift will have a significant impact on how companies assess storage solutions in terms of systems’ performance, operational efficiency, and file services intelligence.”
The IDC report also builds on research first highlighted in an IDC blog last week concerning the cloud. According to the report, the sharpest growth in storage capacity will come from new organizations described as “content depots.” IDC estimates storage consumption from these organizations will grow at a compound annual growth rate of 91.8% through 2012. Examples of content depots include the usual cloud suspects: Google, Amazon, Flickr, and YouTube.
These content depots have different IT requirements and infrastructures than traditional enterprise data centers. We’re seeing examples of these new infrastructures pop up in the market, including systems with logical abstraction between the hardware and software elements; the use of commodity servers as a hardware basis for storage platforms; and the use of clustered file systems.
Some in the industry have compared this “serverization” of storage to the transition between proprietary workstations and PCs in the 1980s. But IDC analyst Rick Villars says this isn’t a zero-sum game. “This isn’t going to replace traditional IT,” he said. “Ninety-five percent of what people are developing and building in the storage industry today is irrelevant to what the cloud is building. You could take that as a negative, but it also translates into opportunity. These are new market spaces and new storage consumers that weren’t around five years ago.”
There’s been a lot of discussion lately about the role the cloud will play as the global economy softens. There is a difference of opinion between those who see a capital-strapped storage market as an even more conservative and risk-averse one and those who argue the opportunity to avoid capital expenditures will nudge traditional IT applications into the cloud. Still others point out the hurdles to cloud computing that remain, including network scalability and bandwidth constraints.
For example, when it comes to storage applications such as archiving, analyst reports from Forrester Research this year cited latency in accessing off-site archived messages and searching them for e-discovery as major barriers to adoption for archiving software-as-a-service (SaaS) offerings.
Cloud computing “definitely exposes weaknesses in networking,” Villars said, but “the closest point to the end user is the cloud, if you want to distribute content to end users spread around the world.”
Other challenges include the growing pains major cloud infrastructures such as Amazon’s S3 have experienced over the last 18 months, and the potential risk of putting more enterprise data eggs in one service provider’s cloud data center basket. Villars points out, “I doubt Amazon has had more problems than a typical large enterprise, and they offer backup with geographic distribution for free.”
However, geographic distribution brings its own challenges, such as varying regulations among different countries. “There are regulatory problems with Europe,” Villars said. “Laws there say that if you have data on a European customer, you can’t move it out of Europe. If you want your cloud provider to spread copies between the U.S., Asia and Europe for global redundancy, that becomes an issue.”