What’s the goal of data retention? That depends on the data. Sometimes data matters for only a second or two, so saving it is irrelevant. Other times it must be kept for decades: with electronic health records, for example, birth certificate data needs to stay on file for 20 years or more in most states.
But how about 1,000 years? That’s the goal of Chris Puttick, CIO of Oxford Archaeology Ltd., which provides archaeology services for construction firms in Europe that need to comply with planning regulations. His job depends on strategic planning around data management.
“Archaeological data is extracted in a one-off ‘experiment’ with our teams on-site, excavating before the new road/airport/tunnel is built over or through it,” he told SearchCIO-Midmarket.com Features Writer Laura Smith. “What is observed, measured and photographed can never be repeated, leaving the resulting data the only surviving record of an archaeological site that had survived thousands of years before the excavation, or like this site, a mere 1,000 years, so our records should aim to be retained for at least as long, or the money and effort spent on the excavation was wasted.”
The corollary here is that what is stored must be found, so data retention strategies and technologies are equally important. And as data — and the corresponding information — consume more and more of our resources, it’s important to make management part of corporate governance.
One solution might be Generally Accepted Recordkeeping Principles, or GARP (save “the world according to” jokes), developed by ARMA International, which include accountability, transparency, integrity, protection, compliance, availability, retention and disposition.
If you want to learn more, log on to our virtual seminar on information governance on Sept. 16.
We are smack dab in the middle of hurricane season, but SMBs should be keeping an eye on well-intentioned employees in addition to the local forecast.
Like the intern who was hired to perform daily data backups to tape drives and mail them to the SMB’s disaster recovery location — an off-site vault. An IT manager decided to check up on the intern’s work after the intern left — and found that the tape drives had no data on them … zilch.
The architect who told me about this chuckled, partly because an intern was used to perform such an important part of his company’s disaster recovery plan, but also because, well, it was just bad quality control on the part of IT.
I look back on the summer jobs I had while in college and remember an example of bad quality control. I worked for an advertising agency that had me (an English major) making copies of floppy disks — its strategy for daily data backups. Did I know the importance of these mysterious black squares? No. Did I do a few things wrong? Oh yeah.
For example, a guide on how to make duplicate copies of your floppy disks says:
- Don’t allow them to come into contact with heat, dust, magnetic fields or electrical appliances.
- Do not keep all of your backup disks together in one place.
- Do not continually use one disk, as disks do wear out! One high-density disk can store a lot of text-related documents, but it is best to make several copies of your work on separate disks.
- It is best to use Windows Explorer or My Computer in Windows to copy files to floppy disks rather than application software such as your word processing or spreadsheet programs.
I didn’t know any of this, and neither did the person in charge of me — we used the same disk over and over.
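Today the spot check that caught the empty tapes is cheap to automate. Below is a minimal Python sketch of that verification step: comparing checksums of source files against their backup copies. The directory layout and function names are illustrative assumptions, not part of any particular backup product.

```python
# Hypothetical backup spot check: confirm that a backup directory really
# contains the same data as the source by comparing SHA-256 checksums.
# Paths and function names are illustrative, not from any specific tool.
import hashlib
import os

def checksum(path):
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return (problem, relative_path) pairs for files that are
    missing from the backup or whose contents differ."""
    problems = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(backup_dir, rel)
            if not os.path.exists(dst):
                problems.append(("missing", rel))
            elif checksum(src) != checksum(dst):
                problems.append(("differs", rel))
    return problems
```

An empty result means every source file made it to the backup intact; anything else is the zilch-on-the-tapes scenario caught early.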
I know that most SMBs have better quality control for data backups in place than these scenarios, and the technology has come a long way: disk-based backup options are dizzying, and new cloud-based backup services are popping up all the time. Then again, a recent study by Enterprise Strategy Group found that on-site disk and tape are still the backup approaches of choice at most businesses.
Moving into September, several experts will tackle SMB backup options on SearchCIO-Midmarket.com, including how one CIO is sticking with tape drives as his primary backup plan, and why he’s not keen on cloud-based backup options.
If you read Linda Tucci’s recent story on the efforts of Tasty Baking Co. to find a workable solution for trade promotion management, you may have been struck, as I was, by why TPM can be such a challenge.
After all, most of the biggest retailers in the world practice some sort of trade promotion management, either via vendor or home-grown solutions. Yet there seems to be a lack of effective software tools for optimizing retail partner relationships, as well as few standards to rally around.
TPM is not a new concept either, but even a top analyst covering the field, Gartner’s Dale Hagemeyer, has not found significant movement in the field since his most recent report, “Seven Key Considerations When Choosing a TPM Solution.”
Yet Tasty’s CIO, Chan Kang, is faced with real issues as he seeks to work TPM into his tightening budget. Though the company’s direct store delivery model produces quality data, “What we don’t do enough is measure the effectiveness of those promotions: how much lift, what is the baseline, the incremental profit — in other words, whether it was a good idea,” Kang said.
Kang is evaluating vendors, but even though industry groups like Trade Promotion Management Associates and the Vendor Compliance Federation are working to promote solutions for TPM, Tasty could still be confronted with vendor lock-in and integration issues with whatever solution it adopts.
Some observers are skeptical that TPM standards can be achieved, but this is one area that seems like a no-brainer for the Oracles and SAPs of the world to come together for the common good. Such cooperation could only help to increase the bottom line — for everybody.
I don’t know about you, but to me, ITIL (or IT Infrastructure Library) is a little overwhelming. I’m only looking at using the ITIL framework as an IT service catalog tool, and I get a little lost.
Under ITIL guidelines, an IT service catalog is a subset of service-level management, which is a subset of service delivery. Service delivery is the topic of only one of the eight ITIL v2 books on IT Service Management (ITSM) guidelines. ITIL v3 has five books of its own, which update some of v2 but also introduce new ITSM strategies.
SearchCIO-Midmarket.com and SearchCIO.com recently ran a survey asking our readers about their ITIL use. We haven’t pulled together all of the results yet, but here’s a preview: When we asked readers to choose up to three areas in which they would like to see improvements to ITIL, they said:
- 35.4% — ITIL should provide more information on how ITIL works with other process-improvement methodologies, like Six Sigma and Lean.
- 31.1% — ITIL should offer more prescriptive advice vs. just guidance.
- 12.9% — ITIL should include more specific advice on transition from v2 to v3.
- 12.5% — ITIL needs to be clearer on the differences between v2 and v3.
- 8.1% — ITIL v3 is too complex.
So, it’s clear that people would like more guidance and less complexity, but anecdotally, a few IT shops and service providers I’ve talked to recently said that ITIL does just the opposite: It clears up some complexity.
When an IT service catalog is being put together, they say, ITIL tells the business and IT what terminology to use.
“ITIL gets people speaking the same language,” said Matt French, marketing director with Service-now, an ITSM Software as a Service provider. “It makes it clear what an incident or a request is, and helps an organization with [corporate] terminology [that is different across the company] use the same terminology.”
ITIL also helps IT set the right expectations for service delivery in terms of service levels and what is possible — and not possible — as far as services the business wants to see and what IT can realistically deliver. It does this by setting the scope of an IT service catalog project, including taking inventory of the skill sets IT has on hand (or not) to deliver a service, and helps organizations choose a set of standard services.
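As a concrete illustration, a catalog entry built along those lines ends up holding things like the agreed service level, an owner and the skill sets needed to deliver. Here is a minimal Python sketch; the field names and sample services are assumptions made for the example, not an ITIL-mandated schema.

```python
# A minimal, illustrative service-catalog entry as a plain data structure.
# Field names and sample services are assumptions, not an ITIL schema.
CATALOG = [
    {
        "service": "Email and calendaring",
        "owner": "Messaging team",
        "service_level": "99.9% availability, 4-hour incident response",
        "required_skills": ["Exchange administration", "spam filtering"],
        "standard": True,   # part of the agreed set of standard services
    },
    {
        "service": "Custom report development",
        "owner": "BI team",
        "service_level": "Best effort, scheduled per request",
        "required_skills": ["SQL", "reporting tools"],
        "standard": False,  # delivered only when the skills are on hand
    },
]

def standard_services(catalog):
    """Return the names of services offered as standard."""
    return [entry["service"] for entry in catalog if entry["standard"]]
```

Even a toy structure like this forces the scoping questions ITIL raises: what counts as a standard service, who owns it, and whether the skills to deliver it exist in-house.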
Any advice on how to use ITIL to reduce complexity, or how you have been able to simplify ITIL at your shop? I’d like to hear from you. Email me at firstname.lastname@example.org.
If the sagging economy has forced midsized companies to delay hiring more IT staff, maybe migrating to Windows 7 can move them off that dime.
According to a recent IDC report, an impressive number of midsized companies that migrated to Windows 7 realized a full return on their investment in just seven months. Replacing Windows XP and Windows Vista also significantly reduced the time help desks spent dealing with malware, downtime and reboots.
One midsized company says the money saved in migrating to Windows 7 has allowed it to hire some much-needed developers.
“Windows 7 gave us more cash to work with because we could throw it on a couple of hundred older PCs, so we didn’t have to buy new ones. Those savings will let us hire a couple of young developers to work on some internal applications we need pushed out,” said Joe Harmon, an IT purchasing agent with a midsized regional health care provider in western New York state. “I was surprised. Microsoft usually costs me money with some of their licensing plans.”
Costs were down in three important labor categories analyzed in the report: IT labor hours per PC, per year for deployment (down 45%); IT labor hours per PC, per year for service desk support (down 65%); and IT labor hours per PC, per year for PC and operating system support (down 55%). In the 14 categories where a set of common end-user activities relating to the operation of Windows 7 was measured, the savings amounted to 43 hours of productivity per year, per user.
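The seven-month payback is easy to sanity-check. The Python sketch below works it out from an upfront migration cost and a monthly savings rate; the dollar figures in the example are invented for illustration and do not come from the IDC report.

```python
# Illustrative payback calculation. The dollar amounts in the example
# comment below are made up; they are NOT figures from the IDC report.
def payback_months(upfront_cost, monthly_savings):
    """Number of whole months until cumulative savings cover the
    upfront migration cost."""
    months = 0
    recovered = 0.0
    while recovered < upfront_cost:
        recovered += monthly_savings
        months += 1
    return months

# e.g. a $70,000 migration that saves $10,000 a month pays back in 7 months
```

Any migration whose monthly savings are roughly one-seventh of its upfront cost lands at the payback period the report describes.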
Like IT professionals at other midmarket companies, Harmon also migrated to Windows 7 because Microsoft’s technical support for Windows XP, which includes regular delivery of security patches, is ending. Harmon said the built-in security in Windows 7 is superior to that of Windows XP, so he won’t be as reliant on security patches.
It’s nice to hear that some financial relief has finally arrived for SMBs, given how the Great Recession has ravaged them.
Every once in a while I like to check out what Microsoft admins are downloading these days. I sometimes do a search on Google, but to get a feel for the work being done in Microsoft shops, I always return to the Microsoft Download Center.
It lists the top five free Microsoft downloads overall, with some of the usual suspects among them: an XP update and Office compatibility tools. This goes to show that shops are still holding onto the older Windows OS while moving some people to newer versions of Office.
Here are the top five free Microsoft downloads and the company’s descriptions for them:
Microsoft Office Compatibility Pack for Word, Excel and PowerPoint File Formats. Open, edit and save documents, workbooks and presentations in the Open XML file formats, which were introduced to Microsoft Office Word, Excel and PowerPoint beginning with Office 2007 and continuing with Office 2010.
DirectX End-User Runtime. Provides updates to 9.0c and previous versions of DirectX — the core Windows technology that drives high-speed multimedia and games on the PC.
Update for Windows XP (KB932823). Resolves an issue in which a user is unable to use Windows Internet Explorer 7 to download files on a computer that is running Windows XP with IME enabled.
.NET Framework Version 2.0 Redistributable Package (x86). Installs the .NET Framework runtime and associated files required to run applications developed to target the .NET Framework v2.0.
Microsoft .NET Framework 4 (Web Installer). Downloads and installs the .NET Framework components required to run on the target machine architecture and OS. An Internet connection is required during the installation. .NET Framework 4 is required to run and develop applications to target the .NET Framework 4.
Digging further, specifically looking at server management tools, what surprised me was that four of the top five weren’t tools in the sense that they fixed system problems. One is a case study on how a business benchmarked its PHP applications on Windows Server 2008, and there are a couple of how-tos, one on an Office Communication Server deployment, another on European data compliance.
Here are the free server management tools that have been downloaded the most out of 4,821 choices, and Microsoft’s descriptions:
Security Update for Windows Server 2008 RC0 for Itanium-based Systems (KB941644). A security issue has been identified in TCP/IP that could allow an attacker to compromise your Windows-based system and gain control over it.
Microsoft Windows Server 2000 Assessment Configuration Pack for European Union Data Protection Directive (EUDPD).
This configuration pack contains configuration items intended to help you establish and validate a desired configuration for your Windows 2000 servers in order to support your European Union Data Protection Directive compliance efforts.
Customer Solution Case Study: Windows Server 2008 Charts a Secure and Flexible Roadmap for Virtual Map.
Optimization at work in Microsoft. A presentation at an executive breakfast seminar, The Business Impact of Infrastructure Optimization, held Feb. 6, 2007.
So do these downloads sync up with what’s going on in your shop, or do you have a set of your own free tools that you can’t live without? I’d like to hear about it. Email me at email@example.com.
Google CEO Eric Schmidt’s statement last week about the end of online anonymity recalls similar words by one of his former colleagues, Scott McNealy of Sun, more than a decade ago.
McNealy caused a stir back in 1999 when he said, “You have zero privacy. … Get over it.” McNealy was referring to the Intel Pentium III processor, which had a feature that could uniquely identify a user. The Internet had nothing to do with privacy in that context, but at a time when people were just getting used to using their credit card online, everybody who heard the statement could make the leap to the perils of e-commerce.
Schmidt said that society is facing major disruptions due to the incredible amount of online data being generated (“5 exabytes [or 5 billion GBs] … every two days,” he said) — mostly user-generated content via blogs, message boards, Twitter, Facebook, etc.
Online anonymity is a paradox. People hide behind anonymous email addresses or forum comments, but there are ways to track them down. In addition, Facebook users may forget that the pictures they posted from Friday night’s trip to the bar can be seen by everybody, even their bosses.
Predictive analysis of consumer behavior is inevitable, he said. “If I look at enough of your messaging and your location, and use artificial intelligence,” Schmidt said, “we can predict where you are going to go.”
That sounds a lot like the movie version of Philip K. Dick’s Minority Report, where Tom Cruise is pelted with personalized messages from every billboard he passes. But this is not science fiction. Electronics retailer Best Buy is partnering with Shopkick to enable your smartphone to communicate with the store for promotions and rewards points as you shop.
Online anonymity will be a casualty of the data-saturated world, Schmidt said. “The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it.”
Ironic, isn’t it? People yelled in protest at McNealy’s words in 1999. I doubt Schmidt’s will make much of a ripple, since nowadays the consumer is a willing accomplice in the end of anonymity.
Get over it? Now it’s more like, who cares?
In the past few years, Ross Pettit, client principal at Chicago-based ThoughtWorks Inc., has seen a shift in client requests. The agile software development consulting firm’s projects are still mainly grounded in custom application development but, more often, he said, organizations want to apply agile best practices to non-IT-related projects. Pettit suggests following these initial steps when adopting an agile project approach:
Develop a release planning stage in which aspects of the project are divided into smaller, more manageable chunks. This stage involves defining the problem in business, not technical, terms, choosing and pairing up team members from different aspects of the business and assigning project facilitators who can step in to remove obstacles.
Hold a weekly checkpoint to gauge progress, the understanding of the problem, what needs to be done next, and how the team is tracking against the agreed-upon solution path. By having the primary stakeholders involved, the business problem is laid out for all to see and obstacles can be removed. “Week after week, there is tremendous transparency and exposure to all the stakeholders …,” Pettit said. “This allows stakeholders to make the resources available for what needs to be done, immediately.”
Be retrospective during each checkpoint. Ask what worked well, what worked poorly, what was confusing, and what to change. “On a weekly basis, this builds in mechanisms that create continuous improvements … where you have continuous planning and tremendous visibility,” he said.
It’s OK to fail, but fail fast. “The nice thing about agile is that it is not only OK to fail, but it’s really good to fail,” he said. “The more often you try and fail, the more you learn about the problem in front of you.”
Start with the hardest nut to crack. Pettit has seen agile projects fall apart because project teams decide to go after the simpler tasks first. “Too often, I see projects fail because they were able to get two or three easy things accomplished, then they get to the more difficult ones they were putting off, and they say ‘I don’t know if we’re up to that one,’” he said.
And even though a project may be focused on a business problem, the CIO will be called on to act as an agile executive. The CIO is the one who can pay attention to all the data coming out of all the projects. “The CIO can make decisions outside of the context of the project, that others may not feel empowered to make, and will see problems that others may not by looking from the outside in,” he said.
Working on a new project and hitting a few bumps? Send me your project problems and I’ll post them on our blog — without your name, or company name, of course — to see if your peers have any advice. Email me at firstname.lastname@example.org.
For some firms that dole out smartphones to their employees, the mobile security policy might consist of remotely wiping the phone if it’s lost or stolen, or if the employee leaves the company. Other than that, users are left pretty much on their own.
That’s not enough, especially as smartphones and apps become smarter and more pervasive. It’s one thing for a company to provision its own smartphone. But many employees are bringing their own smartphones and doing company business on them — which some companies encourage, by the way, to cut down on cell, data plan and management costs.
Which makes last week’s news from the Black Hat conference all the more unnerving. Researchers announced that wallpaper apps on Android phones can collect and transmit data such as phone numbers and messages. In other cases, apps can be installed remotely and used to drop malicious payloads onto a phone.
Given the amount of work being done out of the palm of one’s hand these days, and with mobile health care app stores on the way, the news is startling.
As with anything in security, enough is never enough. Checklists are a start, but which checklists are the right ones? For midmarket IT managers looking for some answers, SearchCIO-Midmarket.com’s sample mobile device management policies and templates are a good place to start.
I don’t want to spoil the fun or productivity that users are enjoying with iPhones, BlackBerrys and Androids, but if attention isn’t paid to them by IT security, then they will just become another access point into your networks.
How frustrating it is to be one of the little guys without the resources to bring to bear on big projects. Well, those frustrations may actually be easier to deal with than the alternative: being a large company entrenched in legacy systems.
I came across a blog post recently by Scott Adams, describing a fantasy world where countries weren’t so paralyzed by legacy systems that they couldn’t move forward. Older systems have too many vested interests, making it difficult to change because “anything that has been around for a while is a complicated and inconvenient mess compared to what its ideal form could be,” Adams wrote.
This got me thinking about the real legacy systems holding back IT: The projects that never go anywhere, not because the project itself is bad, but because the change would overwhelm the systems, strategies, processes and workflows that may not integrate with the change. How, too often, the need for agility is acknowledged, but never addressed. And, more importantly here, how the bigger you are, the more difficult it is to change.
So here’s to all the small IT shops out there, where each member of the staff wears multiple hats and puts in the extra hours because, let’s face it, who else will? Being smaller means you’re closer to the top, closer to making a difference. Yes, your budget is probably smaller and you can’t get to all the things you know you need to tackle, but there are also far fewer hoops to jump through and approval signatures to get.
If you take all of that into consideration, maybe the grass is greener on this side: You can quickly adopt new technologies and there is less red tape. And you probably aren’t so buried in legacy systems that every infrastructure change requires duct tape and a sprinkle of IT magic to get it working.