A short blurb in the Israeli daily Haaretz has the storage world talking: IBM is negotiating to acquire data storage startup Storwize for $140 million. Rather than offering storage solutions directly, Storwize focuses on boosting existing network-attached storage with a drop-in appliance that compresses and decompresses data before it’s stored or accessed, similar to how the more widely deployed WAN optimization technology works. Storwize claims it can reduce storage utilization by 50% to 90%, cutting an organization’s data center footprint, cooling costs and necessary hardware all in one fell swoop.
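That wide 50% to 90% range isn’t marketing vagueness: savings from transparent compression depend heavily on how compressible the data is. As a rough illustration (stock zlib, not Storwize’s actual algorithm), here’s how dramatically the ratio can swing between repetitive data like logs and data that’s already random or compressed:

```python
import os
import zlib

def compression_savings(data: bytes) -> float:
    """Return the fraction of space saved by compressing `data`."""
    compressed = zlib.compress(data, level=6)
    return 1 - len(compressed) / len(data)

# Repetitive data (logs, database pages) compresses extremely well...
logs = b"2010-07-26 INFO request served in 12ms\n" * 10_000
print(f"log-like data: {compression_savings(logs):.0%} saved")

# ...while random (or already-compressed) data barely shrinks at all.
random_blob = os.urandom(400_000)
print(f"random data:   {compression_savings(random_blob):.0%} saved")
```

Which is exactly why vendor benchmarks run against cherry-picked datasets deserve scrutiny.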
With public customers like Polycom reducing storage needs upwards of 60% on everything from disaster recovery backups to Oracle databases, it’s no surprise IBM took notice, offering Storwize’s STN-6000p series through its business partner program.
Organizations could certainly use any opportunity to cut down on storage demands: While costs per gigabyte continue to drop, facilities costs for data centers remain high and the amount of data that needs to be stored has increased six-fold in the past four years with no signs of slowing.
Are we seeing the beginnings of a storage compression war, just as we saw a storage deduplication war start three years ago? Quite possibly: The Register reports that Storwize competitor Exar will likely get a boost as other storage vendors look to beef up their own optimization offerings. If war is imminent, vendors and users alike might take Arun Taneja’s advice from last time around, when he warned that while the need was very real, results and benefits from the current technology need to be fully vetted:
So, be prepared to see a barrage of data coming your way. I am suggesting to the vendor community that they run their products using a common dataset to identify the differences in approaches. I think you should insist on it. Without that, the onus is on you to convert their internal benchmarks to how it might perform in your environment. You may even need to try the system using your own data.
What do you think? Are storage compression appliances going to be standard issue in your upcoming storage deployments, and can you trust the public case studies to apply to your situation? Let me know in the comments or e-mail me directly.
Just because a business is considered small doesn’t make its data storage any less complex or crucial to success. Cisco’s Wednesday release of its Small Business NSS 300 Series Smart Storage line, specifically geared toward businesses with fewer than 100 employees, is a step in the right direction for the previously neglected SMB sector, reports IT News. The release follows the lead of big names like Netgear, HP and others who have set the “larger industry trend of networking or data center companies attempting to be all things to their customers.”
IDC Analyst Ray Boggs attributes it to the “blurring of a lot of this technology where storage is associated with security, which is associated with the Internet, which then gets you into networking.”
The Nitty Gritty
- The series offers two-, four- and six-bay desktop network storage boxes with up to 12TB of capacity based on 2TB SATA drives.
- The storage boxes can be configured as NAS or iSCSI target devices.
- It’ll run you between $900 and $6,000, depending on capacity and functionality.
- An accompanying service plan, Cisco Small Business Pro Service, will run you about $150. That includes three years of tech support: software updates, hardware replacement, and call support.
- The series includes software for setting up a Web server, designing a website, on-disk data encryption to protect against compromised hard drives, and the WordPress blogging platform.
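A note on that 12TB ceiling: it’s raw capacity, and what you can actually use depends on how you configure the array. A quick sketch of standard RAID arithmetic (generic RAID math, not Cisco-specific figures, and it ignores filesystem overhead) shows how much of the six-bay box survives each level:

```python
def usable_tb(bays: int, drive_tb: float, raid: str) -> float:
    """Usable capacity under standard RAID levels (raw drives, no filesystem overhead)."""
    if raid == "raid0":    # striping: every drive holds data
        return bays * drive_tb
    if raid == "raid1":    # mirroring: half the drives hold copies
        return bays * drive_tb / 2
    if raid == "raid5":    # one drive's worth of parity
        return (bays - 1) * drive_tb
    if raid == "raid6":    # two drives' worth of parity
        return (bays - 2) * drive_tb
    raise ValueError(f"unknown RAID level: {raid}")

# Six bays of 2TB SATA: 12TB raw, but considerably less once protected.
for level in ("raid0", "raid1", "raid5", "raid6"):
    print(f"{level}: {usable_tb(6, 2.0, level):.0f} TB usable")
```

Worth keeping in mind when comparing the per-terabyte cost across that $900–$6,000 price range.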
What is your small business currently using as its storage platform? What are your hopes for the future of SMB storage solutions? Let me know in the comments or email me directly at Melanie@ITKnowledgeExchange.com.
According to Aguacer0:
Tape will never die due to compliance issues. It is being replaced as near-line backup due to performance and capacity. Nowadays you will see backups being executed to disk first and then tapes (D-D-T). Now with technologies like dedupe and VTL, tapes are coming from 2nd step to 4 step for archival purposes. But tape will never die.
Mrdenny agrees, speculating that tape will last as long as off-site archives are needed. Meandyou uses tape for data backup: virtual tape for onsite storage and cartridge tape for offsite. ToddN2000 prefers optical media to tape, due to its superior durability.
Dee101 weighs in that the cost of replacing all of the data already allocated to tape is far too weighty an undertaking, especially in a market of quickly evolving disk standards. That didn’t stop Dee101 from switching his own archives to disk recently in order to “be prepared to continue to evolve on disk.”
Petkoa doesn’t like tape, period. Instead, his company relies on disks for quick restore backups and optical storage for “long-term irrevocable backups.” Tapes – “slow and overwritable” – present a major liability “if (maybe when?) [they are] hacked.”
What are your experiences with tape storage? Have you migrated to disk or another form of data backup and archiving that you prefer? Let us know in the comments section or over at the forum, then check out HP’s blog post outlining six reasons why tape is here to stay.
Microsoft’s TechEd 2010 North America — the conference for “cutting-edge insights and expertise that will make life easier for you (and everyone else) at work” — kicked off on June 7th and is running through June 10th in New Orleans. Some of TechTarget’s top editorial bloggers are down there, soaking up what Microsoft’s dishing out, and we’ve compiled some of the highlights of their highlights right here.
Brendan Cournoyer hit on the high points of TechEd over at the Windows Server Notebook blog. First up, the keynote speech, which he said was dominated by cloud talk with the main theme of “how IT professionals can extend the tools and data they currently use on premise (Active Directory, System Center Products, etc.) to cloud computing environments.” President of server and tools business Bob Muglia named the server as the “heart of the cloud,” and articulated that the “company’s goal is to bring all the features of Windows Server and SQL Server to Azure.”
The June 7th announcement of the beta version of Service Pack 1 for Windows 7 and Windows Server 2008 R2, slated for release at the end of July, created quite the buzz as well. Ed Tittel over at Windows Enterprise Desktop speculated that Microsoft is emphasizing SP1’s lack of new and specific features to discourage users from waiting for its release, encouraging them to act now and adopt Windows 7. Apparently the features available in July’s SP1 are already available through Windows Update. Cournoyer noted that SP1 will have “Dynamic Memory for Hyper-V and RemoteFX functionality for VDI.”
Other updates and releases from the Windows Server Notebook:
- Windows Server AppFabric is now available.
- Senior marketing manager for mobile communications August Valdez gave a demo of Windows Phone 7’s features and capabilities. He described it as “basically a portable phone-version of the Office suite.”
- Internet Explorer 9 was mentioned, which will include “new ‘graphics acceleration’ features to take advantage of the latest standards and scenarios for Web browsing.”
- Also announced was a beta of Windows Intune, a platform for PC updating and management tasks such as inventory and patching. Is it “a glimpse of the future” as promised by Muglia?
- The cloud made an appearance again during Amir Netz’s presentation of Microsoft PowerPivot, supposedly where “business intelligence can be combined with — you guessed it — the cloud.” The demonstration included data made available through SharePoint 2010 and SQL Server Analysis Services and integrated with Bing Maps.
For more updates straight from Tech Heads in NOLA, check out their Twitter feed @TechEd_NA or follow Brendan’s coverage at @WindowsTT. Are you at TechEd? What are some of your favorite highlights? Let me know in the comments or email me directly at Melanie@ITKnowledgeExchange.com.
Feel stuck at a legacy company fighting smaller, nimbler competitors? Want to know how your IT shop is supposed to deliver the latest SaaS and cloud solutions when half your stores still run the original cloud software, AS/400? Today’s guest post by Keith Morrow, former CIO of Blockbuster and 7-Eleven and current president of K. Morrow Associates, might help give you the insights you need to succeed. Check back: We’ll be carrying more of Keith’s writing soon.
Since the arrival of online commerce 15 years ago, there have been few technology trends with the potential to revolutionize the retail industry like the ones we see in mobile computing, social networking, and cloud computing. As retail executives, we are challenged to deliver innovative applications that satisfy the customers’ demand for commerce content and transactional ability from anywhere, using any of the always-connected consumer devices: smartphones, set-top boxes, and even Internet-connected picture frames. Furthermore, due to the Great Recession, the notoriously limited retail IT budget is now even tighter, so we have to deliver these applications at little or no additional cost.
As someone who has been in the same boat, I want to offer, over a series of posts, several practical insights for creating innovative, API-enabled applications. I believe the time to take action is now, and if we do things smartly, limited budget or aggressive timeline should not be an impediment.
Find the Perfect Entry Point, Not the Perfect Roadmap
The company that I worked for rented movies and games through retail outlets. We needed to act fast to counter the challenges posed by our competitors who were delivering the goods by mail and digitally via the web and set-top boxes.
Instead of doing lengthy research to discover the perfect product roadmap, we decided to focus on enabling developers and partners to innovate on our core service. We quickly launched a very focused API service as an entry point, and developed a simple iPhone app as a proof-of-concept for building an application on that API.
Then, working with our partners, we quickly created and released an application that customers can use from their mobile devices or set-top boxes. Using this application, customers can search for movie titles, browse recommendations, and decide whether to watch a movie immediately through their set-top box or browser, or pick it up from the closest retail outlet.
The most critical decision was simply not to wait too long to act. Once customers were using the initial app, we monitored their feedback closely, which in turn allowed us to iterate and improve upon the initial entry point.
In my next entry, I’ll look into ways that retailers can shed a more conservative, traditional mindset and embrace new ways for deploying new apps.
As they say, incentive makes perfect, and now there’s incentive—in addition to increased efficiency and lower cost—to green up those data centers. The Environmental Protection Agency has announced “that stand-alone data centers or buildings that house them can now receive the agency’s official recognition for being in the top percentile of the most energy efficient buildings in the category,” as reported at DatacenterDynamics.
With increased and ongoing effort to hone methods for making data centers a greener operation by improving power management, the Energy Star rating creates a more tangible way to measure who’s ahead of the game and who’s falling behind, or whose Power Usage Effectiveness doesn’t fit into the bottom 25 percent. Perhaps it’s a last-ditch effort to counter the exponential growth in data and thus of data centers and energy consumption. From the Energy Star website:
In its 2007 Report to Congress on Server and Data Center Energy Efficiency Opportunities [PDF], EPA estimated that the nation’s servers and data centers consumed about 61 billion kilowatt-hours (kWh) in 2006 (1.5 percent of total U.S. electricity consumption) for a total electricity cost of about $4.5 billion. As one of the fastest growing sectors, national energy consumption by servers and data centers could nearly double by 2011 to more than 100 billion kWh, representing a $7.4 billion annual electricity cost. However, there is significant potential for energy efficiency improvements in data centers.
The rating, built around the Power Usage Effectiveness metric developed by The Green Grid, piggybacks on the agency’s already-existing commercial buildings rating. There is also a ratings system in place for hardware such as servers, with a rating for data center storage equipment in development.
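PUE itself is simple to compute: total facility power divided by the power that actually reaches IT equipment, where 1.0 is the ideal. A minimal sketch, using hypothetical numbers rather than any real facility’s figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is ideal; 2.0+ is common in older facilities."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 500 kW at the meter, 250 kW reaching servers and storage.
# Everything above 1.0 goes to cooling, power distribution losses, lighting, etc.
print(f"PUE = {pue(500, 250):.2f}")  # 2.00: half the power never reaches IT gear
```

Tracking that one ratio over time is the quickest way to see whether efficiency work is actually paying off.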
Where do you expect your data center to rate on the EPA’s new energy efficiency scale? What have you done or what are you doing to improve?
When you hear the word “storage,” what typically comes to mind? Likely NAS, SANs, DAS, data centers and so on, right? Well, there’s another component to storage that tends to get overlooked…at least in the context of oversight, security, and compliance. That is: mobile storage. From smartphones to external hard drives to iPads and beyond, there’s easily as much storage capacity on your business’s mobile systems as there is in the traditional storage environment. That’s a big deal.
So how are you keeping that storage environment under wraps? Management will often proclaim, “We have a policy against placing sensitive information on mobile devices,” and go on to say, “We trust our users to do the right thing.” This is all fine and dandy on paper but in reality, there’s a lot of risk any given business is bearing due to unprotected – or underprotected – mobile storage. If I were a betting man – and I am – I’d venture to say that 95% of all storage-related risks are in your users’ hands, literally.
Keep this in mind when securing your storage systems; there’s a big payoff to be had if you do. Otherwise, when someone – such as a business partner, auditor, regulator, or opposing legal counsel – asks you if your storage environment is secure, what can you truthfully say?
For further reading, check out the pieces I’ve written for TechTarget on mobile storage security.
I’m in the middle of writing a whitepaper on data protection for CSOs, and it occurred to me just how often storage systems are overlooked in security testing. The typical security assessment involves servers, workstations, mobile devices, databases, Web applications, WiFi, and network infrastructure systems. You rarely see/hear anyone scoping storage systems in particular. Why is this? Do people just assume that they’re secure because they’re on a hardware appliance or they paid a gagillion dollars for them and surely someone thought about security along the way?
The reality is, if it has an on/off switch and an IP address, it’s fair game on the network. Not only do high-end NAS and SAN storage systems meet these criteria, but they also have other attack surfaces – especially Web interfaces – that make them that much more susceptible to attack. Unfortunately, such IPs and URLs may or may not be tested during any given internal vulnerability assessment depending on the scope and how deep the tester looks.
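Getting storage gear into scope can start with something as basic as confirming which management interfaces are reachable at all. A minimal sketch below (the hosts are hypothetical placeholders; a real assessment would use a proper scanner and go well beyond a port check):

```python
import socket

MGMT_PORTS = [22, 80, 443, 3260]  # SSH, HTTP/HTTPS admin UIs, iSCSI

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical storage appliances on a management VLAN.
for host in ("192.0.2.10", "192.0.2.11"):
    exposed = [p for p in MGMT_PORTS if port_open(host, p)]
    print(f"{host}: reachable management ports -> {exposed or 'none'}")
```

Any port that answers here is an attack surface someone should be testing, not assuming away.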
Whether you do it yourself or hire an independent information security consultant, when it comes time to scope your next security assessment, be sure to include your storage environment. If you don’t find the weaknesses, surely a bored or malicious insider will. Better to be proactive for something so critical to your business.
It’s a dark day for networking professionals, particularly the Cisco fans in their ranks: Apple’s iOS 4 will make their lives harder in more ways than one, but Cisco itself doesn’t seem to care.
While millions of gadgeteers worldwide oohed and aahed at the iPhone and iPad’s latest features, including video calling and better ads, networking professionals groaned because finding information about their bread and butter, Cisco’s ubiquitous router and switch operating system IOS, just got harder.
As if it wasn’t bad enough that the iPhone and iPad are both notorious for causing trouble on campus networks, now Cisco fans face sifting through Apple-related information when trying to troubleshoot their hardware or just keep up on the latest news.
At first, many (including myself) thought this might be a swipe by Apple at Cisco after the latter tried enforcing its iPhone trademark when Apple first launched its mobile device line. This time, however, Apple did it all legal-like, licensing rights to the term “iOS” from Cisco.
Looks like Cisco fans better brush up on their advanced search queries if they want to stay ahead of the Apple fanboys and girls.
The power of electricity has long been known, but only in recent years, with a focus on both being greener and saving money, has power management again taken high priority among large organizations. This guest post from Pam Seale, product marketing manager with Absolute Software, goes into some critical steps to keeping power consumption in line.
Few organizations will argue the value of power management policies. Not only is energy conservation an important part of environmental stewardship, but by defining when inactive computers switch to a lower power setting or turn off, organizations can easily reduce costs and please the bottom line.
It seems, however, that even fewer organizations are certain how to implement power management policies that both make sense for their work environment and are easily enforceable. So what are some of the key things to keep in mind when designing a power management policy?
Understand your current power reality. To find out where efficiencies can occur and to establish a benchmark to measure success, you need to know how much power your computers are currently consuming. The fastest way to collect this information is via a power management product, typically an installed agent that reports detailed metrics on energy use for each device and the overall fleet. You can also enlist the help of a power management ROI calculator to determine how much you can potentially save.
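A back-of-the-envelope version of that ROI calculation is easy to sketch yourself. The fleet size, wattages, idle hours, and electricity rate below are illustrative assumptions, not figures from any vendor’s calculator:

```python
def annual_savings(machines: int, idle_watts: float, sleep_watts: float,
                   idle_hours_per_day: float, dollars_per_kwh: float) -> float:
    """Yearly savings from putting idle machines to sleep instead of
    leaving them at full idle draw."""
    watts_saved = idle_watts - sleep_watts
    kwh_per_year = machines * watts_saved * idle_hours_per_day * 365 / 1000
    return kwh_per_year * dollars_per_kwh

# 500 desktops idling at 80 W vs. 4 W asleep, idle 14 hours a day, $0.11/kWh.
print(f"${annual_savings(500, 80, 4, 14, 0.11):,.0f} per year")
```

Even with conservative inputs, the number tends to be large enough to justify the policy work, which is the point of establishing the benchmark first.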
Note policy metric considerations.
- Work habits of internal teams: Power management schedules should be flexible to account for various users’ work hours; management products should allow you to define groups to which unique metrics can be applied.
- Power source settings: A battery-powered computer should probably be set to power down or hibernate after a shorter duration of inactivity than a device that is plugged into the wall. Power management policies should acknowledge this.
- Flexible actions: To accommodate diverse users, power management actions should be flexible: log out, hibernate, sleep modes, shut down, etc. Action triggers (how long a device must be inactive to activate them) should be equally accommodating.
Refine results. Power management tools should allow you to examine both current and historical power use, power on time, etc. Comparing these metrics will reveal where greater efficiency can be achieved.
Take advantage of rebates. Government and provider rebates, grants and subsidies are available to organizations that implement computer power management policies, and can cover some or all of the costs of your power management tools.
With the right tools in place and a basic understanding of your organization’s power use, power management policies need not be a daunting task. There are a number of products available today to help simplify the design and management of your power policies; it’s simply a matter of finding which solution supports the above capabilities and best fits your organization’s needs.
For more information on power management policies, and to learn about the power management capabilities of Absolute Manage – a cross-platform computer lifecycle management tool by Absolute Software – visit www.absolute.com/power or view the Absolute Software blog.