November 19, 2009 10:41 AM
Posted by: Guest Author
Tags: Automated Storage Tiering, Cloud Technologies, Devang Panchigar, Element Manager, Storage Economics, Storage Resource Management, Thin provisioning
We’re pleased to welcome Devang Panchigar of StorageNerve into the community with this guest post on storage spending.
The Storage Economics Practice
We all buy storage, whether in the SMB space or at the enterprise level. We use storage to run our business, to store structured and unstructured data. Data means everything these days; without data, we cannot do business.
But have we thought about the economics associated with storage? As consumers, we tend to buy more than necessary at times, whether to keep a buffer or in anticipation of projected growth, business requirements, customer requirements, technology improvements, and so on.
Let's stop for a minute and figure out what we can do to keep up with all the use cases above without growing our data storage as rapidly. Rather, let's find ways to compress, consolidate, and reduce the footprint of our data.
I am in no way suggesting you not buy storage. But if a customer walks up to me and says, “We are growing our storage at 70% a year,” and their balance sheet doesn’t reflect that growth, I will not buy into those storage growth numbers. They are probably coming from a vendor trying to push more product into the storage environment.
There are several aspects of Storage Economics one should consider: how your shrinking IT budget can still keep up with your growing business requirements, and what you can do to keep a balance between the two.
Of the various aspects of Storage Economics below, some apply in the SMB space, some in the enterprise space, and some at all levels. These may become the building blocks of your Storage Economics practice:
- It’s important to know what storage you have and where you have it.
- Try to move away from fat provisioning to thin provisioning.
- Consider front-end storage virtualization using standard storage arrays in the back end.
- Run non-vendor specific SRM (Storage Resource Management) tools for storage optimization and storage management.
- A centralized storage management tool is a must, even though you can still perform daily tasks using the various element managers.
- Industry-standard average storage utilization numbers range between 35% and 45%. If you can push your storage utilization up to 75 to 80%, it will drive costs down phenomenally.
- Implement deduplication. Verify whether your storage array supports deduplication natively; if not, implement it in various parts of your storage environment, such as backup and unstructured data.
- Run a heterogeneous environment with multiple vendors in it to keep balance relating to price structures.
- Though ILM is a forgotten word these days, make sure you run tiering within your storage environment, which can help you move data from higher-SLA tiers to lower-SLA tiers for cost containment purposes.
- Consider moving post-warranty support for your storage hardware to independent service providers rather than the manufacturers.
- Look at extending the life of your storage arrays from the typical 2.5 to 3 years to as long as 6 years.
- Implement technologies like automated storage tiering, storage deduplication, storage compression and many more in the market today.
- Storage environments have gotten very complex over the years with new storage technologies and switching technologies. At the end of the day, invest into a technology that benefits your organization, your infrastructure, your business model and your requirements.
- Leverage outsourced computing models, including the cloud technologies available in the market today: private clouds, public clouds, or a mesh of these cloud technologies and offerings.
- Budget for your storage requirements and try to live within that budget, even if you have to take drastic measures to do so.
- Try to gain more operational efficiencies within the storage environment.
- Understand the TCO of any new storage purchase; the cost of new storage can include several aspects of implementation, including migration, consulting, downtime, missed SLAs, training, etc.
- Try to reclaim your data or storage as old systems are retired or migrated.
- Check for inconsistencies in your storage environment, as those can result in missed SLAs, downtime, and penalties.
- Do not over-provision and do not over-budget. It’s just storage: if you need more, you can buy more. Storage sitting idle for years in anticipation of one day being used will cause your efficiency to slip heavily.
- Do not create unnecessary storage management tasks and processes for your storage environment.
- Having good, working backups is very important, but do not tie down your storage with numerous copies of snaps, clones, mirrors, BCVs, etc. for a rainy day. Rather, have a DR plan and copy a single instance of data remotely for DR purposes.
- Plot trends for your storage environment. See if trends can help you budget, forecast and provision your storage accurately.
- Remember: the larger your storage footprint, the larger your backup footprint will be, meaning more storage space, longer backup windows, more network traffic, slower response times, more tapes, more offsite backups, more backup management cost, and possibly more licensing cost.
- Get away from managing islands of storage; move to more centralized storage management. The long-term effects are amazing.
- Try to reduce licensing costs around storage software. The less storage you deploy, the lower the per-TB licensing costs you will pay.
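Several of the points above come down to simple mechanics. As a rough illustration of why deduplication shrinks footprint so dramatically for backup-style workloads, here is a minimal content-hash sketch in Python. It is a toy model, not any vendor's implementation: the class name, fixed 4 KB block size, and SHA-256 addressing are all assumptions made for the example.

```python
import hashlib

class DedupStore:
    """Toy content-addressable store: identical blocks are kept only once."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}        # sha256 digest -> block bytes (physical storage)
        self.logical_bytes = 0  # total bytes clients have written

    def write(self, data):
        """Split data into fixed-size blocks; store each unique block once."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # no-op if block already stored
            refs.append(digest)
        self.logical_bytes += len(data)
        return refs  # the "file" is just a list of block references

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
# Ten "daily backups" of mostly identical data consume roughly the space of one.
for day in range(10):
    store.write(b"A" * 40960 + str(day).encode())

ratio = store.logical_bytes / store.physical_bytes()
print(f"logical {store.logical_bytes} B, physical {store.physical_bytes()} B, "
      f"dedup ratio {ratio:.1f}x")
```

The same arithmetic explains the utilization point above: every duplicate block you avoid storing is capacity you do not have to buy, power, back up, or license.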
There are numerous areas of storage management where customers can pursue efficiencies that will help them better manage storage, reduce footprint, and reduce CAPEX and OPEX. It starts as a small practice within an organization, and the value it creates wins over the rest of the IT management teams.
So take this opportunity and plant the seeds for your Storage Economics practice now.
With more than seven years of IT experience, Devang is currently the Director of Technology Solutions and IT Operations at Computer Data Source, Inc. Along with various industry certifications, Devang holds a Bachelor of Science from South Gujarat University, India, and a Master of Science in Computer Science from North Carolina A&T State University. You can catch Devang’s storage blog at StorageNerve.com and his enterprise commentary at GestaltIT.com.
November 19, 2009 8:59 AM
Posted by: Michael Morisy
In IT, how often is it that the wires that get crossed aren’t electric? By one estimate, almost 80% of people spend two hours or more a day on e-mail.
That’s a lot of time for miscommunication to happen. Even if a poorly worded message sends someone off to do slightly the wrong task, valuable time has been wasted and the sender might be jeopardizing their shot at a promotion down the road. I spoke with Dianna Booher, author of E-Writing: 21st Century Tools for Effective Communication, and she said poorly-worded e-mails can even be career killers.
And it’s not those accidental “Reply All”s that kill careers: It’s the e-mails that are only circulated internally, maybe even only to a handful of people, that can come back to haunt employees without them even knowing it.
“Writing outside of the organization, nobody who controls your paycheck will likely read it,” Dianna said. “Most e-mails internally get stuck in the files, circulated to 8 other people on the team, and they make impressions that last for a long, long time – and they are a picture of your thinking process. If they are disorganized, if they omit details that are relevant or if they are confusing, that’s a reflection of how you think.”
Booher offered to share some advice on what, in her opinion, makes a good e-mail:
1) Avoid knee-jerk responses: Email’s greatest benefit can also be its greatest drawback: speed. We open. We read. We reply. Then we think–or don’t, as the case may be.
2) If you don’t have something to say, don’t say it: On the street, when someone you know speaks to you, etiquette requires that you return the greeting. Not so with email. Don’t clutter up others’ in-boxes with inane responses.
3) Tune in to the tone of directives: Brief is good. Blunt is not.
November 17, 2009 6:52 AM
Posted by: Michael Morisy
A stunning 96% of security products up for certification fail to achieve it on their first go, claims a report put out by ICSA Labs, a certification division of Verizon Business. The most common reasons for failure?
The report found the number one reason a product fails during initial testing is that it doesn’t adequately perform as intended. Across seven product categories, core product functionality accounted for 78 percent of initial test failures: for example, an anti-virus product failing to prevent infection, or a firewall or IPS product failing to filter malicious traffic.
The failure of a product to completely and accurately log data was the second most common reason. Incomplete or inaccurate logging of who did what and when accounted for 58 percent of initial failures.
Is it time to head for the hills? Well, maybe not: A security certification authority telling you un-certified products simply don’t work is a little bit like a rabbi telling you bacon isn’t worth the health risks: I’ll take my bacon, thank you very much, and you should probably keep using security products.
I thought Alan Shimel had an interesting take which might strike to the heart of the problem: It’s not that the products don’t work, it’s that they aren’t working the way they’re installed.
Now, you have to take all of this with a grain of salt because of where the report is coming from. ICSA admittedly has a vested interest in seeing more products get tested and users demanding that products are tested prior to buying. But from my experience with far too many security tools, without some expert implementation, getting this stuff to work as intended is worse than putting together one of those do-it-yourself pieces of furniture that you get from Staples or Office Depot. As an industry we have to do better at making our solutions easier to install, easier to use and easier to see the value of.
— ashimmy.com, The Ashimmy Blog, Nov 2009
So often, implementation and execution are half (or more) of the battle. Larry Walsh over on ChannelInsider worries about a larger threat from the report, however: that proper protection will simply take a backseat as users conclude that security doesn’t work anyway, so why bother.
The problem with this report is that it’s coming at a time when end users are questioning the value of the products they’ve spent millions of dollars on. While even bad security products will provide some level of threat protection, the ICSA findings could give end users some reason for pause when considering new purchases. Many security solution providers are complaining that end users—particularly SMBs—are reticent to invest in new security technologies because they don’t believe they’re at risk or don’t have the budget. The ICSA findings could give them a new reason to doubt the need for security investment.
I imagine those users will be in the minority: There are still too many high-profile data leakage cases, with ever increasing fines, for business owners. What do you think? Have you seen security products fail to operate as promised, or operate at all? Let me know in the comments or at Michael@ITKnowledgeExchange.com.
November 16, 2009 10:41 AM
Posted by: Michael Morisy
Tags: IT policies, Wall Street Journal
The Wall Street Journal gives an inside cover today to an old question: Why can’t I pick the technology I use in the office? (Skip the paywall with Google) The Wall Street Journal’s certainly not the first to address the topic: Slate tackled it this past summer, with countless office workers grumbling the same questions well before, during and after these and other pieces.
The article tackles the costs, infrastructure and support challenges in handing over IT decisions to users, but generally is pretty keen on a rosy future where companies cut costs using consumer tools, support for non-standard choices is handled via internal user self-help forums, and data leakage is taken care of via virtual machines launching here, there and everywhere.
Read it and let me know what you think, in the comments, on Twitter at @Morisy, or via Michael@ITKnowledgeExchange.com. I’m more than happy to keep your information private if requested.
More on users and IT:
November 13, 2009 3:14 PM
Posted by: Michael Morisy
Working for the man might land two programmers in jail. Of course, it wasn’t just any man: Their boss, Bernie Madoff; their office, the now infamous House 17; their project, technical support for his $18 billion scam.
As Reuters reports:
The FBI arrested Jerome O’Hara, 46, and George Perez, 43, at their homes on Friday morning on criminal charges of conspiracy for falsifying books and records at both the broker-dealer and investment arms of Bernard L. Madoff Investment Securities LLC in New York.
“The computer codes and random algorithms they allegedly designed served to deceive investors and regulators and concealed Madoff’s crimes,” said federal prosecutor Preet Bharara. “They have been charged for their roles in Madoff’s epic fraud, and the investigation remains ongoing.”
The max sentence for the duo for their part in the fraud: 30 years in jail plus millions in fines.
November 11, 2009 2:08 PM
Posted by: Michael Morisy
Tags: IT Project Failures, Link Bait, Project Management, Sesame Street
10. Focus on the fundamentals. Sesame Street tackles a whole host of issues, from basic counting and the alphabet to overcoming cultural differences and even death. For the most part, however, the issues are key elements of early development: Not always easy, but necessary. Are the projects and problems you’re tackling necessary to the bottom line? Will they give a return to the business?
9. Speak different languages. Early on, Sesame Street emphasized the importance of learning foreign languages, even if it was just the basics, such as the Count learning to say uno to diez in Spanish. More now than ever, it’s critical that IT learns to speak in business terms to explain value, as recent guest blogger Claude Roeltgen noted. So-called soft skills can save a career, and really, it’s just a matter of saying what you need and what you can do in the right language.
[Embedded video: http://www.youtube.com/v/Jg3WY2Sgxtw]
8. Learn to count. Or better yet, teach others to count. Just as our dear friend The Count spent painstaking hours teaching others to count from one to ten (in English and beyond!), IT must teach the rest of the business how IT enables profits and performance. And if you let others do the counting? Expect IT to become a cost center, with aggressive accounting for every dollar and annual budget fights.
7. Be wary of strangers. Sesame Street wins over adult fans with copious guest stars, running the gamut of celebrities, athletes and musicians. But these guests are introduced by trusted adults on the show, and viewers learn that while you shouldn’t fear people different than you, you also shouldn’t give them your complete trust until they’ve earned it. What are your security policies, and what do you do to ensure that temporary workers or outside consultants have what they need — but nothing else?
6. It’s not (always) easy being green. Kermit the Frog was right: No matter how often people tout the benefits of “going green,” cutting costs while saving energy can be full of trade-offs. There’s always new equipment to buy, new processes to manage, and while there may be a green revolution, there’s a premium to be paid for leading that charge. On the other hand, Kermit did get the girl, and in many cases the energy savings from a comprehensive, business-savvy “green” policy can bring home the bacon at the end of the day.
November 10, 2009 10:17 AM
Posted by: Michael Morisy
Tags: Bernie Madoff
When an Investment Dealer’s Digest article (“The Technology Behind the Scam”) lumped some of the blame for Bernie Madoff’s scam onto the AS/400, IBM’s venerable business system, and Madoff’s “antiquated systems,” the iSeries developer community was quick to defend its fabled friend. After all, technologies don’t scam people, people scam people.
John Dodge does dig up some juicy details on the Ponzi scheme’s execution based on forensic reports:
“[House 17] was a closed system, separate and distinct from any computer system utilized by the other BLMIS business units; consistent with one designed to mass produce fictitious customer statements,” according to Looby’s declaration. House 17’s expressed purpose was to maintain phony records and crank out millions of phony IRS 1099s on capital gains and dividends, trade confirmations, management reports and customer statements.
The AS/400 was like a giant Selectric — indeed, the Application System/400 is a multipurpose server that’s very good at printing. IBM publishes several technical overviews for IT professionals, known as Redbooks, on the AS/400’s extensive printing capabilities, and also offers printing and forms-design software for it.
But does the AS/400 actually make it any easier to perpetrate an $18 billion scam? Or is it simply a reliable Wall Street standard, a poor technology caught up in the wrong place at the wrong time with the wrong crowd? Vernon Hamberg, a software architect and regular on the Midrange technical discussion list, wrote a spirited defense of the platform, which he kindly offered to let me publish here:
I read with interest the article by John Dodge about technology behind the Madoff scam. It appears, from a quick read, to put much of the blame squarely on the AS/400 – the technology in question. I strongly object to this – it is, in my opinion, completely wrong-headed. I learned long ago that computers are stupid – they do exactly what you tell them, not what you want. If things were done on these systems that allowed Madoff to carry out his Ponzi scheme, it is not the system’s fault. It is some programmer, some auditor, some whatever human being behind it all.
I am a computer professional who works on these so-called legacy systems – a false categorization, unless you lump Unix systems in along with it. (Unix came out over 40 years ago – shall we talk legacy?) The IBM midrange systems have a tremendous feature, backward-compatibility – anything you wrote 20 years ago can be compiled on current systems without any change in source code. Talk to us about VB.net – about API calls in Windows that don’t work in the next release.
This strength of the system was exploited by a human – the extreme segregation of computing resources that let Madoff get away with his scheme. Mr Dodge’s report of the printing characteristics – well, it is a very narrow presentation of the system’s capabilities. That seems completely beside the point. And this is not unique to these systems. At all!! A distinction without a difference.
I appreciate you taking the time to read this. I ask you to publish a retraction or clarification – e.g., that the technology behind it was NOT to blame. Perhaps something about the true strengths of the platform and how human beings were able to take those strengths and fleece other people in such a way. THAT would be an interesting study in human nature – not the veiled suggestion of culpability of any technology as against that of those who use it.
Vernon M. Hamberg
RJS Software Systems
What are your thoughts? Does complex, custom legacy software make it easier to quietly caper, or are villains just villains, no matter how shiny the software and technology? I’d love to hear your thoughts in the comments or at Michael@ITKnowledgeExchange.com.
More on the Bernie Madoff scam:
November 9, 2009 4:11 PM
Posted by: Michael Morisy
Security guru Bruce Schneier recently noted some Columbia University research on “Laissez-Faire File Sharing,” which advocates allowing users to set their own sharing permissions, with a focus on access auditing rather than access control (administrator policies don’t stop users from receiving or sharing a file, but all the viewers and editors of that file are then logged for later review and flagging).
Schneier simplifies it as a Wikipedian ideal (“Everybody has access to everything, but there are audit mechanisms in place to prevent abuse”), but that shortchanges the idea. Not all users can access files, for example: They must be granted access by a current user. The paper’s authors argue that this is already happening in an underground IT economy through e-mail attachments, USB thumbdrives and other workarounds, and that by working with the system, rather than against it, the new paradigm has the “potential to increase both productivity and security.”
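The model described above swaps up-front access control for delegated grants plus an audit trail: any current holder can share a file, and every grant and access is logged for later review. A minimal Python sketch of that idea follows; the class and method names are hypothetical and are not drawn from the paper's actual system.

```python
from datetime import datetime, timezone

class AuditedFile:
    """Laissez-faire sharing: any current holder may grant access,
    but every action (including denials) is recorded for audit."""

    def __init__(self, name, owner):
        self.name = name
        self.holders = {owner}   # users who have been granted access
        self.audit_log = []      # (timestamp, actor, action, detail)

    def _log(self, actor, action, detail):
        self.audit_log.append((datetime.now(timezone.utc), actor, action, detail))

    def share(self, actor, recipient):
        # No administrator approval is required, but the grant is logged.
        if actor not in self.holders:
            self._log(actor, "share-denied", recipient)
            raise PermissionError(f"{actor} does not hold {self.name}")
        self.holders.add(recipient)
        self._log(actor, "share", recipient)

    def read(self, actor):
        if actor not in self.holders:
            self._log(actor, "read-denied", "")
            raise PermissionError(f"{actor} cannot read {self.name}")
        self._log(actor, "read", "")
        return f"<contents of {self.name}>"

doc = AuditedFile("q3-forecast.xls", owner="alice")
doc.share("alice", "bob")   # alice delegates access to bob; no admin in the loop
doc.read("bob")             # allowed, and logged
actions = [(actor, action) for _, actor, action, _ in doc.audit_log]
print(actions)
```

The point of the design is visible in the log: control shifts from preventing sharing to making every share attributable after the fact.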
The paper outlines 5 cornerstones of Laissez-Faire File Sharing: Continued »
November 5, 2009 9:12 AM
Posted by: Michael Morisy
A newly disclosed SSL security hole allows savvy attackers to inject data into supposedly secure streams of the encryption standard, but while standards bodies and major vendors are quickly working to plug the vulnerability, it seems the attack avenues are currently relatively minimal.
As The Register reported on the SSL bug:
Indeed, Moxie Marlinspike, a security researcher who has repeatedly exposed serious shortcomings in SSL, said the attacks were hard to pull off in the real world, in large part because they appeared to target a rarely used technology known as client certificate authentication.
“It’s clever, but to my knowledge the common cases in which the majority of people use SSL (webmail, online banking, etc.) are currently unaffected,” he wrote in an email. “I haven’t found these attacks to be very useful in practice.”
The security hole has been known since August in some circles, with ICASI (Industry Consortium for Advancement of Security on the Internet) heading up “Project Mogul,” an attempt to roll out an industry-wide set of security patches in a coordinated manner.