Ask the IT Consultant

Boston SIM Consultants' Roundtable Blog


December 1, 2009  9:00 PM

Where has Technology Innovation Gone?



Posted by: Beth Cohen
innovation, IT Innovation, new, technology

Question: The venture investment community seems to be only interested in sure bets based on proven technology. Is innovation really dead?

Unlike the aftermath of the dotcom bust, when the years 2002-2005 saw tremendous investment in and development of new technology by the entrepreneurial community, this downturn has not been as technologically fruitful. For the last year or so, the IT market has been coasting on previous technology breakthroughs in systems and storage virtualization, WAN optimization, and wireless networking, and on the increased availability of SaaS (Software as a Service) offerings of all stripes. Those technologies are all innovations that came out of the telecom and technology downturn in 2001 that followed the dotcom boom of the late 1990s. In the early part of this decade, underemployed startup engineers took the half-baked ideas of the dotcom era and turned them into practical products. The translation of the ASP (Application Service Provider) concept into SaaS is the perfect example of this process. Sadly, with the major innovations in these areas behind us, most recent product offerings focus on incremental improvements rather than technologies that turn the status quo on its head.

Is there any hope that the pace of innovation will pick up again soon? I hope so, but the slowdown is not for lack of new ideas. The meltdown of the financial markets a year ago, which has meant reduced access to capital for new enterprises, has in the short term pushed entrepreneurs toward more potentially lucrative markets such as the growing medical and consumer areas. This is not to say that there has been no innovation at all, but from what I have seen, the smart engineers are not devoting much time to developing the next killer business application. Why should they, when the enterprise isn't particularly inclined to be buying these days?

So what are the newly underemployed engineers working on these days? There is no doubt that smarter mobile devices are the next big thing. The level of innovation in this area matches the heady days of the early Internet gold rush. It seems like every other day there is an announcement of a new device, application or service to capitalize on the seemingly insatiable market for pocket-sized mobile systems that do more. Mobile email, texting, twittering and other rapid communication delivery systems are becoming the norm outside of the business community and are poised to sweep through the enterprise as well. The continued development of pocket devices that do more in a smaller footprint is ultimately going to transform business; while many companies have not gotten much past BlackBerries for their executives and road warriors, the new generation is using these devices for everything from romance to job hunting.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.

November 24, 2009  11:00 PM

Preparing IT for Flu Epidemics



Posted by: ITKE
business continuity, Business Value, Disaster Recovery

Question: Are there ways we can prepare for having a large percentage of our technical support staff out with H1N1 flu?

With the heightened interest in how to respond to the H1N1 pandemic, every organization should be considering how to manage production support operations in the face of high absenteeism rates that could exceed 30%. Because the illness often hits suddenly, staff members could be sick or home caring for a family member and unable to work. Temporarily losing key individuals, such as system administrators and DBAs, can be traumatic without proper planning and redundancy. A good response plan should focus on assuring that necessary skill sets are available when needed. The following suggested approach is a good start towards making sure you are covered:

  • Identify the critical functions and performance timeframe. This information may have already been gathered as part of a business impact analysis. If not, draw up a simple list of the functions or tasks and how time critical they are.
  • List the skills and knowledge required to perform critical functions and the staff that possesses them. These might include UNIX administration or knowledge of a custom finance application. Management and the operational staff will know.
  • Identify the primary and secondary staff members who can provide backup for each task or skill. In particular, identify critical skills that are possessed by only one staff member. Gaps such as these are the biggest risks.
  • Develop a plan for backfilling those critical skills. This may include documenting procedures and training other staff members or locating an outside resource to provide the skill on a temporary basis.
  • Practice running operations using backup staff and documentation. This validates the ability of the backup staff to perform the tasks and also provides on-the-job training and job enrichment opportunities.
  • Plan for working at home (WAH). In many organizations, technical staff members are already required to be available 7×24, so the mechanisms are in place.
  • Develop a contingency plan for reducing workload when absenteeism is high. Discuss with senior management the possibility of performing only minimal system changes and delaying major deployments to reduce risk and maintain system stability. Given the possible business implications of such a plan, buy-in from all stakeholders is essential. Define conditions and triggers for putting the plan in action.
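
The gap analysis in the steps above can be sketched in code. Here is a minimal, hypothetical Python example; the staff names and skills are invented, and a real inventory would come from your own skills matrix:

```python
from collections import defaultdict

def find_single_points_of_failure(staff_skills):
    """Given {staff member: [skills]}, return the critical skills
    that only one person possesses, mapped to that person."""
    coverage = defaultdict(list)
    for person, skills in staff_skills.items():
        for skill in skills:
            coverage[skill].append(person)
    return {skill: people[0]
            for skill, people in coverage.items()
            if len(people) == 1}

# Invented example inventory
staff = {
    "Alice": ["UNIX administration", "Oracle DBA"],
    "Bob":   ["UNIX administration", "Network operations"],
    "Carol": ["Custom finance application", "Network operations"],
}

for skill, person in sorted(find_single_points_of_failure(staff).items()):
    print(f"RISK: only {person} can perform '{skill}'")
```

Skills that surface in this list are exactly the gaps that the backfilling plan in the next step should address first.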

Having to deliver services without a full staff is a situation that every organization encounters sooner or later. It can be triggered by events other than a flu pandemic. Preparing for it will make your organization more resilient and provide unexpected benefits.

About the Author

John McWilliams, JH McWilliams & Associates, Business Continuity Consultants


November 6, 2009  6:00 PM

Tracking the Elusive Customer



Posted by: Beth Cohen
customers, marketing, web 2.0

Question:  Now that we have Twitter, LinkedIn, Facebook and 24/7 connectivity, why is it still so difficult to communicate with customers?

With the Web 2.0 social networking revolution and the now ubiquitous Blackberry permanently attached to every executive's belt, there seem to be a thousand ways we can reach out and touch our co-workers, customers, friends, neighbors, and old classmates — pretty much anyone on the planet at will.  Certainly the sheer number of potential sales channels has increased exponentially – blogs, Twitter, you name it.  You would think that with all these new tools, it would be easier than ever to get your message out to current and future customers.

The good news is that potential customers who are interested in the goods and services your company offers are now often sophisticated and knowledgeable about the products and options.  For companies selling complex software or services this is a great boon that can mean a shortened sales cycle.  On the flip side, with all these new communication tools everyone seems to have simultaneously reached the same level of information saturation and burnout.  Potential customers are tuning out in droves, since they cannot even keep up with the flood of messages from their co-workers and friends.  I have noticed a sharp rise in the number of highly connected professional people I have spoken to recently who have stopped reading their email, listening to their voice mail (which is so passé anyway), and following media channels, and who can only be reached through direct connections or in person; texting seems to be the communication tool of choice these days.  What this means for marketing departments is that all those great marketing campaigns you slaved over are simply being ignored.

So in this new era of information overload, how can you reach customers?  The answer is not really very surprising; sales and marketing have always been about the personal touch and taking the time to develop real relationships with your customers.  Despite all the newfangled ways of communicating, in the end it comes down to basics: building a strong network of personal connections will land the sale every time.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


October 23, 2009  3:00 PM

Is Anyone Guarding the Internet Henhouse?



Posted by: Beth Cohen
business responsibility, Consumer IT technology, Identity theft, IT security, Security

Question:  With all the rampant fraud and identity theft on the Internet, why are consumers responsible for protecting their data when they have so little control?

The assumption is that if data is compromised or one's identity stolen, it is somehow the victim's fault.  The picture painted is that we are responsible for managing our own security.  To a certain extent that is true, and we should, as responsible citizens, practice basic network security hygiene.  Yet we are constantly barraged with advice telling us to install data protection software, invent complex passwords, change them often and monitor our financial activities.  Is it really our fault when the system lets us down and our money is stolen, our identity is compromised or our computers are hacked?

I would argue that the reality is far different.  We as consumers do not have much control over the security of our data or how secure our computer’s operating systems are.  Even if we pay for everything in cash, if we have a bank account, we are open to fraud.  One of my students recently pointed out that the Internet can be thought of as a giant recording device.  Everything that is ever posted to the net is still out there to be found and possibly used for nefarious purposes. Once our money enters the global financial system we have little or no say over who touches the information and what they do with it.

The average computer user should not be required to be a sophisticated network security professional to use Internet services.  Consumer protection laws were originally put in place in the early-to-mid 20th century because we came to realize that when we purchased something that wasn't what we thought it was, it was not because we weren't smart shoppers; it was because the buyer/seller relationship was skewed too far toward the sellers, leaving buyers without enough power to make informed decisions.

It is time that we come to the understanding that the Internet is entering a similar phase in its market maturity.  Companies need to regain or maintain consumer trust.  As good corporate citizens, it is our responsibility to implement proper security measures to protect customers' data.  The recent spate of laws in Massachusetts, the European Union and other places that are designed to put the responsibility for protecting personally identifiable data on the companies holding it is a step in the right direction.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


October 12, 2009  2:00 AM

Disaster Recovery and Data Integrity



Posted by: ITKE
Backup, business continuity, Data Integrity, Disaster Recovery

Question: When recovering an application at a remote disaster recovery site, how can the data’s integrity be verified before resuming production processing?

One of the primary goals of a disaster recovery (DR) plan is to protect business data.  Meeting the RTO (recovery time objective) is often the most time-consuming part of executing a DR plan.  The objective is to shorten the recovery task as much as possible while maintaining trust in your data, so that the business can resume regular operations quickly.  Since the complexity of many applications leaves far too many places for bad data to hide, the following recommendations will help speed the recovery process.

Disaster Recovery Preparation

  • Implement the fastest replication to your DR site that budget and technology allow. The best option to minimize data loss is synchronous replication of your business critical data to a remote backup site. This ensures full data integrity because the replication logs will record data currency while the transactions are copied to both sites simultaneously.
  • The next best option is an asynchronous replication scheme. Even if the delay is measured in minutes or based on periodic file copies, you will still be exposed to some known level of data loss. However, since this option is considerably less expensive, the business owners might decide that the reduced costs offset the increased risk of data loss. Make sure that everyone understands the impact of data loss on the business recovery process. This includes establishing data loss tolerances and identifying likely types of data loss during the development of the DR plan.
  • Copy verifying data to your DR site along with the primary data. This means finding supporting data from outside the application that corroborates your database. This might include data input forms, activity logs, emails, checksums, or input files. Some of these may be paper-based and could be scanned or copied to the DR site.

Recovery Plan Preparation

  • Assess the data at the DR site by knowing both its currency (i.e. how old it is, as precisely as possible) and its consistency across multiple applications. Currency is important because it will identify any lost transactions or updates, while consistency is important because incomplete transactions can cause data corruption problems after you have resumed production.
  • Review the replication logs for the last data transmission to establish currency.
  • Compare verifying data at the DR site with the recovered data to follow specific transactions and identify the degree of data loss.
  • Have the business application users access the data and determine if recent changes to production data are found in the recovered data.
  • Execute a build acceptance test (BAT) like the ones used to verify application installation and integrity. These tests often point out data anomalies.
  • Have a process and tools in place for tracking and resolving data problems after you return the applications to production.
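
As a small illustration of the currency assessment described above, here is a hypothetical Python sketch; the timestamps and the 15-minute recovery point objective are invented for the example:

```python
from datetime import datetime, timedelta

def assess_currency(last_replicated_at, disaster_at, rpo):
    """Estimate the data loss window from the last confirmed entry in
    the replication log, and check it against the agreed recovery
    point objective (RPO)."""
    loss_window = disaster_at - last_replicated_at
    return loss_window, loss_window <= rpo

# Invented example: last confirmed replication at 02:48, primary site
# lost at 03:00, business agreed to a 15-minute RPO.
loss, within_rpo = assess_currency(
    datetime(2009, 10, 12, 2, 48),
    datetime(2009, 10, 12, 3, 0),
    timedelta(minutes=15),
)
print(f"Potential data loss window: {loss}; within RPO: {within_rpo}")
```

Knowing the loss window this precisely tells the business users exactly which recent transactions to look for when they validate the recovered data.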

The above recommendations may not identify all the issues, but they will go a long way toward meeting your RTOs and customer requirements, so that you will be able to start using the application while continuing the verification of the recovered data.

John McWilliams, JH McWilliams & Associates, Business Continuity Consultants


September 11, 2009  6:30 PM

Distributed Enterprise Data Risks



Posted by: Beth Cohen
Business Security, Business Value, cloud computing, Distributed systems, enterprise architectures, Supply chain

Question:  With all the talk about data on the cloud, is it possible to build a distributed enterprise architecture that addresses the issues of security and cost effective delivery without compromising business integrity?

For example, let us say you are relying on a major retailer's supply chain system for inventory control and tracking.  The retailer represents 60% of your annual sales.  They have intimate knowledge of all your costs and are squeezing you to cut your overheads further.  It almost looks like they have better business intelligence tools than you do.  You are uncomfortable with the relationship, but are afraid that pulling out will have disastrous effects on your core business and profits.  The board is nervous and Wall Street is not treating your stock price kindly.  Too many companies are finding themselves in exactly that situation as they are required to share more and more data with their business partners.  Yes, there are cost efficiencies to be found by taking this approach, but there is also the substantial risk of loss of control.

Data integrity, security and confidentiality have long relied on a combination of network and application-based security.  As long as the data was secured on local systems using role-based account access combined with strong firewalls, the thinking went, corporate data was well secured.  As enterprise architectures get more complex and the supply chain more integrated, data is increasingly stored in massive data warehouses and SOAs (service-oriented architectures).  To add even more complexity, more enterprises are using the cloud to augment their internal systems or to share information with their business partners.  Data is increasingly spilling out to the cloud with little or no thought given to the security implications for the enterprise.  With the recent news about credit card fraud and identity theft on a massive scale, companies are, and should be, worried about protecting and securing their data.

At the basic level that means companies need to understand where their data resides, who is using it, how they are using it and, most importantly, why they are using it.  Some of the security questions that the new distributed architectures raise for the enterprise include:

  • Just what does data security mean in new contexts where you no longer have full control over the systems?
  • How is responsibility for data integrity and confidentiality assured if there are multiple parties involved in the chain of authority?
  • Do private clouds avoid or solve the problem, or do they make it more complex to manage as companies increasingly have to interface with business partners and customers on the cloud?
  • What types of architectures and mechanisms can be implemented through the systems to assure full data integrity and confidentiality?
  • What are the best approaches to protecting the most sensitive data, particularly in the face of increased regulations and audit requirements?

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


September 3, 2009  4:00 PM

Agile – The next big thing?



Posted by: Beth Cohen
Agile Methodologies, Business Value, IT Innovation, Scrum, Software development

Question:  What is behind claims that agile methodologies can increase software development productivity 10-100 times over traditional approaches?  Is this for real?

I just spent a week with a wildly enthusiastic international crowd of 1400 agilists attending the August 2009 Agile Conference in Chicago.  As far as they are concerned, agile is set to become the standard development methodology within a few years.  I agree that there is much merit to what the agile community is saying.  Certainly, better communication between product owners and developers is always desirable, and daily meetings and the idea of breaking the work into short, manageable chunks called iterations are bound to improve any project's velocity.  But I am skeptical of any claims of such dramatically increased productivity.

If you dissect what the agile folks mean, the high productivity numbers become suspect.  For example, one case study involving a Danish software company looks great at first glance, but looking more closely at the methodology, each iteration requires the work to be pre-staged so that it is ready for the development effort.  All that pre-staging is magically not counted.  By breaking the work into smaller chunks and working closely with product owners, there is less wasted effort in building unwanted features.  This is all true, but to call the abandoned features unproductive is somewhat disingenuous.  Indecisive management is a fact of life, and going agile is not going to fix it.

Unfortunately, I also see agile software development quickly getting a reputation for creating new ways to overwork already overburdened knowledge workers.  It is all well and good that the agile principles are based on 40-hour work weeks, but so are the PMI (Project Management Institute) recommendations, and we all know how well those are adhered to.  The Scrum folks even have the audacity to call their iterations sprints.  You cannot run a project marathon as a series of sprints without serious burnout.  Since the developers on the team participate in work estimates, there is even more pressure to blame the workers if they fail to meet projections that were unrealistic to begin with.  At the conference, one session on metrics even suggested that the team not share productivity information with management in case the numbers were misconstrued.

In conclusion, I find much in agile methodologies attractive and just plain good common sense.  However, any claims that seem too good to be true should be viewed with skepticism.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


August 17, 2009  9:00 PM

Widget Company – A Cloud Security Parable: Part 1



Posted by: Beth Cohen
Business Value, cloud computing, IT security, Privacy protection, vendor relations

Question:  Everyone is singing the praises of cloud computing, or at least all the vendors who are trying to sell services.  Just how safe is my confidential data on the cloud anyway?

To put cloud computing business security risk in concrete terms, I will tell you the parable of the Widget Company and Cloud Computing. Has anything like this happened to you?

Once upon a time, Widget Company, a $300 million global company in the plastic widget business, decides to outsource their Oracle ERP application platform to Cloud Co., a cloud vendor that provides on-demand Oracle database services.  The CFO encourages the board to approve the cloud outsourcing project because it is projected to reduce support costs for their Oracle application by 20%, allowing the company to grow while avoiding an investment in a large, new and very expensive Oracle system.  The board signs a two-year contract for services with the agreement that the cloud vendor is responsible for paying the annual Oracle maintenance contract.  Both the legal and finance departments review the contracts and give their blessings.

At first everything seems to be working and management is pleased with their decision.  Then reality sets in.  After three months, users increasingly complain that server access is slow.  Cloud Co. responds to the complaints by first informing Widget's IT department that their DSL Internet connection is probably not large enough for the anticipated user load, so they upgrade to a higher-speed connection that increases their network connectivity costs by 30%.  When the increased bandwidth still does not fix the problem, Cloud Co. responds by applying a patch recommended by Oracle.  After the installation of the patch, Widget Company finds that one of their mission-critical applications is no longer compatible with Cloud Co.'s offering, and several months of customer data is lost due to the problems.  Oracle claims no responsibility because the application does not meet their development standards.  Productivity and staff confidence in the application plummet.  After the two companies' lawyers argue for a while, Widget decides to pull out of the contract, which still has a year to run.  Cloud Co. agrees to end the contract.

Widget Company's management and IT department breathe a sigh of relief until they realize that restoring the data backup from Cloud Co. will take months of costly integration to re-implement on the old servers, which are fortunately still running, just in case.  However, Widget incurs additional costs when they discover they need to upgrade their Oracle licenses and pay for a year of back maintenance to get critically needed support.

Six months later, Cloud Co. goes out of business; Widget was not the only company unhappy with their services.  Eight months later, a Widget Company sales associate reports that their main competitor seems to have insider information about Widget's customer list.  After a bit of legal discovery, Widget's management discovers that after Cloud Co. went out of business their assets were sold to a salvage company that resold the old backup tapes to a shady operation in Ukraine, which then sold the customer list to their competitor.  At this point, after spending over $500,000 in sunk costs and with little hope of successful legal action against the guilty parties, Widget's management team is completely fed up, fires the CFO along with most of the IT department, and vows never to try cloud computing outsourcing ever again.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


July 15, 2009  7:00 PM

More Clouds with a Chance of Storms



Posted by: Beth Cohen
Business Security, Business Value, cloud computing, innovation, IT Innovation, Security, technology innovation

Question:  What exactly are the top security issues that cloud vendors need to address?

Somehow I am getting a sense of déjà vu on cloud security.  Don't get me wrong, folks, but the cow is already out of the barn.  After all, more than 69% of all consumer Internet users have used at least one cloud service in the past year, and that doesn't include the nearly 100% of all consumers who are using web mail services such as Gmail, Yahoo and others of their ilk.

On the other hand, businesses and enterprises are not rushing to jump on the cloud computing bandwagon in the same kinds of numbers.  So what is holding companies back from taking advantage of the very real benefits that cloud offers?  We can argue that business requires a higher level of security and validation than the average consumer, but the simple answer is really a large dose of inertia, fear and doubt.  That is, all the usual reasons that businesses use as excuses to wait for consumer products and services to prove their worth before committing precious corporate IT resources.

In surveys conducted by IDC in August 2008 and June 2009, concerns about security topped the list of challenges for 88.5% of the respondents, followed closely by performance (88.1%) and availability (84.8%).  Clearly security is a major impediment to cloud architecture implementation for many organizations, and it will need to be properly addressed before cloud architectures are fully embraced by the business community.

Cloud security issues can be divided into three major categories: business, regulatory and technical.  Business issues can generally be quantified as risks to the business in whatever form.  Major business concerns for the enterprise include:

  • Legal issues related to the control and protection of intellectual property and sensitive business information
  • The difficulty of establishing end to end business data validation
  • Regulatory issues related to data ownership and proper handling procedures
  • A perception of increased potential for data and business loss
  • Risk of reduced data or systems availability
  • Proper integration of the mix of secured data residing both in the cloud and on the internal corporate networks

The major global regulatory issues that influence technical and business decisions around cloud computing architectures include:

  • Rising consumer data protection laws around the world
  • PCI Compliance and the need to ensure end to end data protection
  • Banking regulations

It is clear that many of the business and regulatory issues can be addressed with properly secured cloud architectures, applications, networks and systems, but cloud and network security is quite complex.  It encompasses such diverse disciplines as networking, application development, database architecture and design, hardware architecture, and systems design.  Many standard network security best practices developed for the enterprise are inadequate for the new cloud architectures.  However, by taking a network services approach to the architecture of cloud services, there are many advanced methods that can be used to address cloud security issues and allay most if not all of the business owners' concerns.

About the Author

Beth Cohen, Luth Computer Specialists, Inc.


July 7, 2009  12:00 PM

Massachusetts Privacy Laws Compliance — Part 2



Posted by: Davidatkma
Business Security, compliance, IT security, Massachusetts privacy law, Privacy protection

Question: How can my organization establish a compliance program to meet the requirements of the Massachusetts Privacy Law 201 CMR 17.00?

In a previous blog post on the pending Massachusetts privacy laws we outlined what was required to comply with the regulations, which probably left you a bit worried and uncertain about your next steps.  To help clear up any confusion, we will delve into more detail about managing a compliance program, to help you avoid the risk of random acts of non-compliance that might get you and your company into serious legal trouble.

Basically, a compliance program is a management-directed, budgeted, operational business function; think program management 101. The program should cover, at a high level, all the standard operational or business functions:

  • Communications
  • People
  • Processes
  • Technology
  • Metrics

Communication: As with anything in business, communications can never be overemphasized, even if their importance is often overlooked.  The point is to keep the program on everyone's mind.  Use standard communications tools such as announcements, posters, emails, newsletters, surveys and quarterly compliance reporting.  To really drive home the importance, link compliance communications to employee performance so that the desire to stay current is personally beneficial.

People: Staff attitudes will determine the success of your compliance program; technology alone will not keep your data safe and secure.  Do not assume that everyone has a common understanding of compliance as you launch your program.  Staff training will help establish common understanding and expectations, but you still need written job descriptions.  Written roles and responsibilities are critical for setting expectations for meeting compliance objectives.  Identify a group coordinator role whose job it is to disseminate information and coordinate communications with the compliance program manager.

Processes: The processes needed for developing and deploying a compliance program include writing policies, conducting risk assessments, establishing regular compliance activities, being ready for any compliance incidents and maintaining a planned events calendar.  Focus the business processes that support compliance on the way your company uses and stores personal information (PI).  The policies should state that PI can only be stored in approved locations and used within approved guidelines.  Establish a hot line or question box so you can quickly respond to any compliance concerns related to a particular business practice.  Err on the side of caution: it is far more prudent to delay a response in order to verify the need than to respond rapidly with possibly inappropriate information and expose your company to a potential fine or lawsuit.
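
As an illustration of checking whether PI is stored outside approved locations, here is a hypothetical Python sketch; the patterns shown are simplistic examples, and a real compliance program would tune them to its own data and locale:

```python
import re

# Simplistic, hypothetical PI patterns -- illustrative only.
PI_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_text_for_pi(text):
    """Return the names of the PI patterns found in a block of text."""
    return [name for name, pattern in PI_PATTERNS.items()
            if pattern.search(text)]
```

Running a scan like this over file shares and mailboxes that are not on the approved-locations list is one way to find PI that has drifted outside the policy.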

Technology: You have probably spent a great deal of resources maximizing information sharing to grow your company's products and services.  Does that mean you need to restrain this activity in the future?  Not exactly; compliance does not imply curtailing information sharing per se, but you do want to look at PI with a new pair of eyes to decide when, with whom and where you will share it.  Being accountable does not mean you are restricted in your use of the information; it just means that you must protect and use it in a more aware manner.  To achieve this objective, you need controls.  We will visit the notion of controls in a future blog; for now, controls = protections.

Compliance Metrics: Your compliance program is alive and changing on a minute-by-minute basis.  It is important to develop compliance metrics to monitor the success of your program.  The indicators should be based on what you consider the most important factors to measure.  A few examples might include the percentage of people trained in compliance, the days since the last review of access logs, or the number of incidents that have been noted.
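
The example indicators above could be computed with a small script.  A hypothetical Python sketch, with invented numbers:

```python
from datetime import date

def compliance_metrics(staff_total, staff_trained,
                       last_log_review, today, open_incidents):
    """Compute the example indicators: percent of staff trained, days
    since the access logs were last reviewed, and noted incidents."""
    return {
        "percent_trained": round(100.0 * staff_trained / staff_total, 1),
        "days_since_log_review": (today - last_log_review).days,
        "open_incidents": open_incidents,
    }

# Invented numbers for illustration
metrics = compliance_metrics(40, 30, date(2009, 6, 15), date(2009, 7, 7), 2)
print(metrics)
```

Reporting numbers like these quarterly, alongside the communications program, keeps compliance visible to both staff and management.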

Don’t just sweep compliance under the rug and hope it goes away – it won’t.  You will not reach compliance after a breach.  Be proactive to be safe.

