A paperless office was a dream of many an organization in the 1980s and ’90s. The simple word processor was followed by spreadsheets and other office automation software. It was soon realized that documents so created needed to be managed. So the ‘Document Management System’ (DMS) came into being and later as the World Wide Web came along, it dawned on people that the artifacts placed on the Web also needed to be managed; so the ‘Content Management System’ (CMS) followed. Now let us look at these terms and understand them.
A document management system (DMS) is a computer system (or set of computer programs) used to track and store electronic documents and/or images of paper documents. It is usually also capable of keeping track of the different versions created by different users (history tracking). The term has some overlap with the concept of content management systems. A DMS is often viewed as a component of Enterprise Content Management (ECM) systems and is related to digital asset management, document imaging, workflow systems and records management systems.
In early 2001, when I put in a DMS in our organization, it took us some effort to change the organization’s work culture; I had to persuade people to try out the electronic form of managing documents. When it succeeded, I was very happy and satisfied, but soon realized that I had to expand my horizon of thought to consider other demands that had suddenly sprung up. Our marketing department wanted all their publicity material, advertisements in print, radio and TV, artwork and other creative material to be stored, catalogued and made shareable. That was a tall order and I had to struggle to find a solution. Later, as our website got loaded with content and our intranet started overflowing with material, it became apparent that these too needed our attention. I then got exposed to the developing area of content management.
An enterprise content management system (ECM) involves management of content, documents, details and records related to the organizational processes of an enterprise. The purpose and result is to manage the organization’s unstructured information (content), with all its diversity of format and location. The main objectives of enterprise content management are to streamline access, eliminate bottlenecks, optimize security and maintain integrity.
A CMS/ECM provides a collection of procedures for managing workflow in a collaborative environment. The procedures are designed to do the following:
- Allow for a large number of people to contribute to and share stored data.
- Control access to the data, based on user roles (defining which information users or user groups can view, edit, publish, etc.).
- Aid in easy storage and retrieval of data.
- Reduce repetitive duplicate input.
- Improve the ease of report writing.
- Improve communication between users.
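The role-based access control described in the list above can be sketched in a few lines of code. The roles and permissions below are purely hypothetical, chosen for illustration, not drawn from any particular CMS product:

```python
# Minimal sketch of role-based access control in a CMS.
# Roles and permissions are hypothetical, for illustration only.

ROLE_PERMISSIONS = {
    "viewer": {"view"},
    "editor": {"view", "edit"},
    "publisher": {"view", "edit", "publish"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("editor", "edit"))      # editors may edit
print(can("viewer", "publish"))   # viewers may not publish
```

In a real system the role-to-permission mapping would be defined centrally by the respective managers through a workflow process, as described below, rather than hard-coded.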
In my last organization, I had put in an ECM/CMS covering all forms of records, including normal office documents. We provided access to all content through the enterprise portal, and ensured security by centrally defining access rights accorded by the respective managers through a workflow process. Placing all important office records centrally had quite a few advantages: data could be shared within the group, security features could be enabled, and data management in terms of safety, backup, etc. became much easier. People could access records at any time and from any location, since content was no longer confined to a desktop or laptop.
It is, however, important that the purpose of putting in a content management system in any organization be clear, and that subsequent steps proceed in the stated direction. Success of the system can be gauged by fulfillment of the objectives, such as information sharing, safety/security of data, user convenience, etc.
These festive months are times to enjoy and rejoice. We do have fun; but there is something else to these festivities that gives us the jitters. It is the greeting messages that run unhindered through our communication pipes and create those famed bottlenecks.
As a CIO, I have faced these situations often and have interesting stories to tell. It was in 1998, when I had just connected all offices of my organization on e-mail. In the initial period, reluctant users would send occasional mails to others only when forced to. We had connected all offices using a VSAT network with meager bandwidth, as it was expensive. But as Diwali approached, the network went numb, and it took us a while to discover that it was the sudden burst of greetings traffic that had choked our network. Then came the New Year and the network started to blink again. People had by then learnt to create new cards using Paintbrush and other utilities, and those attachments were really heavy. Over the next two years, we took several measures to address this problem. For instance, I sent a mail to all users requesting them to be choosy when sending greetings, and to send them only to those whom they knew rather than marking them to all. When that didn’t work, we had to block access to the central mailing groups for all except a few seniors. To bring a smile back to those sad faces, we introduced a greeting cards application, asking people to choose cards from it instead of creating their own. We invited all the creative artists to draw new cards with their signatures and add them to the library.
Matters changed over the years as bandwidth got cheaper. With larger pipes the problem has perhaps become manageable, or perhaps not quite so, as this traffic still poses a problem from time to time. We know of the choke created on our mobile networks when people’s SMS messages flow with gusto; the mobile companies had to resort to higher tariffs for such periods as a measure to control traffic.
The greetings conundrum does cause its own sweet trouble. Being a social activity, it makes it difficult to be harsh with people, and managements generally sympathize with their brethren. This is tricky; CIOs have to find a new way to address this problem. There are a few tips that I can offer, though there could be other good methods adopted by some of us.
1. Advisory to users: Users sometimes need to be educated, made aware or simply told to exercise judgment. It may help to send a message to all users asking them to send greetings only to those whom they know rather than marking them to all in the organization. I also used to mention users’ complaints of unsolicited greeting messages from people not known to them.
2. Set an example with our conduct: I decided that I would not send mass messages, and would also not reply to such messages even if they came from close friends. I then persuaded employees in my department and many senior functionaries to observe similar restraint; and it worked.
3. Create a greetings library for internal usage: This standardizes the ritual and reduces data traffic. Those who do not follow this practice can be talked to.
4. Greetings coming in or going out of the organization: Such incoming and outgoing messages also create a bottleneck. Though not much can be done with respect to our dealings with official contacts, we can request users to make use of birthday greetings sites, or their personal mail (Yahoo/Gmail, etc.), for greeting their friends and other contacts.
Festivals are social events and we have to let people enjoy and greet each other. While such freedom is desirable, it makes sense to keep a watch on the computing and network infrastructure and ensure that it is available to the organization and people at large. That is the responsibility that a CIO is bestowed with, and he has to find a way to ensure that the systems function at all times.
I attended another conference last week, one of the many that dot our cities every week. Friday evenings are usually preferred for events by vendors, media companies, and other event management organizations, as that ensures better attendance. Nothing wrong with that; in fact, this keeps all the constituents happy. But let me discuss this specific conference that I attended last.
This was a full-day conference in one of the 5-star hotels in the NCR area, and it focused on ‘Data Center Strategies’. The event was to have several sessions during the day, with a few of them running on parallel tracks. The topics covered included the setting up of datacenters, cabling solutions, air conditioning options, infrastructure optimization, server virtualization, cloud computing, etc. The audience consisted of CIOs from various companies in the NCR area, though a few came from outside the region as well. I was invited to be a speaker in one of the sessions, and so were a few other CIOs. The organizers had quite a few sponsors, and some of them put up stalls to display their products or expertise. There was also an entertainment program set up for the evening, followed by cocktails and dinner. In short, the event was planned to be a big affair with the right partners as sponsors, and they thought they had everything well worked out.
The event started off in the morning at the appointed time, though the turnout was much lower than planned. As the morning progressed a few others walked in, but the number was still a bit on the sorry side. Some members preferred to step outside the room during the sessions to network with fellow CIOs. The scene in the post-lunch period, however, turned a bit healthier, and by the evening tea break the numbers had built up to a decent scale. The exhibitors may have had a tough time, as very few people made their way to the stalls, which wore a deserted look. As the sessions ended, many more arrived from nowhere, keen to be a part of the entertainment show in the evening and the networking cocktails and dinner thereafter.
I am not sure if the organizers, sponsors, exhibitors, speakers or audience were really happy with the way the event went; in any case, I do not think the purpose of the conference was really served. As I set out to think about the conference and the reasons for it falling short of success, a few points emerged:
1. The target audience was not clearly identified: The subject of datacenter strategies did not go down well with the CIOs. It was clear that most CIOs are increasingly moving towards outsourcing their computing facilities. Many of them have hosted their servers externally, and a few have already moved their applications to run as a PaaS or SaaS service. The cloud computing model is also being looked at seriously by CIOs, so the direction is clear: CIOs are moving away from fortifying their datacenters. The targeted audience was therefore not very appropriate, and the seminar would have been better served if it had targeted service providers and the staff of large datacenters.
2. The conference was stretched: A full-day seminar was perhaps a stretch, and CIOs obviously did not find it easy to take off a full day to attend. That perhaps explains why the audience swelled only in the second half. I usually find the attention span of an audience waning after half a day of closed-room presentations and discussions.
3. Need to understand the mood of CIOs: In all events it is important to understand the needs of the audience, and programs should be designed accordingly. Organizers, however, under pressure from sponsors, usually subject CIOs to long sessions, including vendor presentations. Seminars nowadays happen by the dozen, and CIOs attend only those where they find value; in other cases they come merely to network with their fellow professionals. Friday evenings are usually relaxed, and a short seminar followed by entertainment and dinner is well accepted.
These were a few of my observations, and I feel that such seminars would succeed if the organizers identified the right target audience and designed their programs to address its needs. Seminars these days are far too many, and every seminar has to bring in something different to be able to attract an audience.
With so much having been said about cloud computing and the great promise that it holds, it is but natural that companies examine it for feasibility and usefulness. While there is a tremendous vendor push on one hand, CIOs also face increased pressure from management, who favour a greater degree of outsourcing. However, progress on this front has been slow; there is a lot of smoke and less fire.
The software vendors/service providers are a major driving force pushing this solution; doing their bit to popularize it through advertisements, seminars, mass mailers and newspaper and magazine articles. However, deft handling might produce better results. I shall highlight this point using two examples:
a. Overhype: Though the voice has been heard far and wide, user perception is still hazy and the scene sure is cloudy. Cloud computing is often touted as a solution for all ills, which ends up discouraging users. Vendors would perhaps do better by investing in creating proper awareness. When approached by vendors, I have often asked them to study our set-up and suggest an appropriate way forward; but the response has not been very encouraging.
b. Are the vendors ready?: While the sales representatives do a good job of selling the proposition, they muddle through the next step when discussing details. Often, their tariff structures are incomplete, as they haven’t considered the various usage and default conditions put to them. Licensing has also been a problem, since vendor policies on converting from the existing perpetual licensing model to the revenue model of the cloud set-up are not clear. License fees for certain software, which are charged on the basis of CPUs used, are also a concern.
The way ahead for users
The user companies, I am sure, are accustomed to the hype that gets created whenever new technologies are introduced into the market. Over time, as technologies mature, users get wiser and slowly start evaluating, having more information at their command. Cloud computing as a solution, in my opinion, is just passing through this phase. Users are becoming more aware and getting into informed debates with vendors. However, they will do well to consider the following:
a. Evaluate and deploy the most appropriate solution: Cloud computing is here to stay. It is, however, important that we do not jump onto the bandwagon without adequate analysis and a proper assessment of organizational needs. Depending on the current IT landscape and the quality of solutions presented by the vendors, CIOs may find it more appropriate to move to ‘platform as a service’ (PaaS) or ‘software as a service’ (SaaS) first and then move to the cloud at a later stage. Users should exercise their judgment and not get carried away.
b. Go ahead, but exercise caution where necessary: Users often get stuck not knowing where to start. They say they have servers, storage, etc. which are still functional, and see these as an impediment to moving the applications that run on them. However, all resources will never become obsolete at once; we will always have machines of different vintage. It is sometimes better to move a new application to the cloud, or a current application which suffers a bottleneck, rather than make fresh investments. It is the first step that matters. And if that works, the further march gets so much easier.
In short, ‘cloud computing’ is an interesting journey interspersed with the usual roadblocks and challenges. Adequate planning and preparation however makes the journey easier and fruitful. Where there is a will, there is a way.
Having dealt with the basics of what cloud computing is, let us go further to understand more of this subject and talk about its main characteristics and deployment models. With so much hype surrounding the topic, simple matters often get missed out, leaving us a bit skeptical of its utility. Many therefore stay unsure, wondering whether this solution is appropriate for them or whether this is the right time to adopt it.
We can look at the advantages of cloud computing and examine if this would benefit us. Cloud computing exhibits the following key characteristics:
1. Reduction in costs: The cloud model (especially the public cloud) works on a shared delivery model, which enables sharing of resources and costs across a large pool of users and therefore brings down costs. Instead of companies making capital investments, they incur operational expenses. This lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing is on a utility computing basis and the user is charged only for the resources used.
2. Agility: Since resources are provided ‘on demand’, the model brings in agility, improving users’ ability to re-provision technological/infrastructure resources. It thus ensures scalability and elasticity, with resources provisioned on a self-service and near real-time basis.
3. Device and location independence: Users are able to access systems using a web browser, regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
4. Peak-load capacity management: In normal cases, we deploy infrastructure to meet peak-load demands, and it lies idle at other times. This problem is taken care of by provisioning resources on demand. There are utilization and efficiency improvements for systems that are often only 10–20% utilized.
5. Reliability: Service levels are assured through the SLAs signed and the service provider usually provides for multiple redundant sites, which makes well-designed cloud computing architecture suitable for business continuity and disaster recovery.
6. Performance monitoring: SLAs should cover performance monitoring services that the partner must provide. The user company can thus be relieved of this responsibility.
7. Security: The subject of security is often discussed and there are huge concerns especially on the public cloud model. In my opinion, security could improve due to centralization of data, increased security-focused resources, etc. Nonetheless, concerns can persist about loss of control over certain sensitive data, and the lack of security when on shared platforms.
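The cost and peak-load points above can be made concrete with a back-of-the-envelope comparison of provisioning for peak versus paying per use. All rates and utilization figures below are made-up assumptions for illustration, not actual tariffs:

```python
# Hypothetical comparison: own enough servers for peak load around the
# clock, versus pay a (higher) utility rate only for hours actually used.

hours_in_month = 720
peak_servers = 10          # capacity needed at peak
avg_utilization = 0.15     # systems are often only 10-20% utilized

# On-premise: pay for peak capacity all month, idle or not.
on_prem_rate = 0.50        # assumed cost per owned server-hour
on_prem_cost = peak_servers * hours_in_month * on_prem_rate

# Cloud (utility pricing): pay only for server-hours consumed.
cloud_rate = 0.80          # assumed per-hour utility rate
used_server_hours = peak_servers * hours_in_month * avg_utilization
cloud_cost = used_server_hours * cloud_rate

print(f"on-prem: ${on_prem_cost:.0f}, cloud: ${cloud_cost:.0f}")
```

Even with a per-hour rate well above the owned cost, low average utilization makes the pay-per-use bill smaller; the picture reverses if utilization is consistently high, which is why the analysis has to be done per organization.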
Various models of deployment are possible and organizations have been using them for a while. Let us discuss two of them which are usually debated.
The public cloud: A public cloud is one based on the standard cloud computing model, in which a service provider makes the resources, such as applications and storage, available to the general public, over the Internet. Public cloud services may be offered on a pay-per-usage model. Since this is based on a shared services model, it really helps in bringing down costs.
The private cloud: A private cloud is infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted internally or externally. Here companies, in effect, try ‘cloud computing at home’ instead of turning to an Internet-based service. The idea is that you get all the scalability, metering, and time-to-market benefits of a public cloud service without ceding control and security to a service provider, or paying its recurring costs.
The private cloud model has, however, attracted criticism because users ‘still have to buy, build, and manage them’ and thus do not benefit from lower up-front capital costs and less hands-on management. Adoption of the public cloud is largely held back by concerns over security, and I am sure that as the concept matures and the security issues are addressed, a larger number of organizations will be seen on the public cloud.
Cloud computing is a current topic, and it sure would look bad on my part if I stayed away from this discussion. The subject is ever present in all seminars, discussions and debates, and people swear by it. In a discussion on television the other day, a panelist stated that this was the biggest thing ever to have happened to the IT industry, and the others nodded in acceptance. I do not know if they were really in agreement or just wanted to look good to the audience, but the discussion went on espousing the cause.
Cloud computing is all the rage. The problem is that (as with Web 2.0) everyone seems to have a different definition. On the Internet, ‘the cloud’ is a familiar cliché; but when combined with ‘computing’, the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing — basically virtual servers available over the Internet. Others go very broad, arguing that anything you consume outside the firewall is ‘in the cloud’, including conventional outsourcing.
Articles in IT and business magazines, advertisements, and direct mailers from vendors to CEOs and business heads created quite a few embarrassing moments for me. Touting cloud computing as the ultimate fix for IT problems, they led unsuspecting business managers to believe that the solution to their woes was near. However, subsequent discussions with a few vendors left the matter unresolved. The truth was that the vendors themselves were riding a wave and did not have their feet on the ground. They had some ‘overall’ models but no clear direction on how to put forth a comprehensive solution to the customer.
What is cloud computing
Now let us understand what cloud computing means in simple terms. I intend to make it easy to understand and not get into an all-encompassing definition. Let us say, “Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet)”.
Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service. The concept of cloud computing fulfills a perpetual need of IT; a way to increase capacity or add capabilities on the fly without having to invest in new infrastructure, training new personnel, or licensing new software. Cloud computing speaks of a pay-per-use service in real time over the Internet.
How cloud computing works
Cloud computing providers deliver applications via the Internet; these are accessed from a web browser, while the business software and data are stored on servers at a remote location. In some cases, legacy applications (running on a client-server model) are delivered via a screen-sharing technology; in other cases, business applications are coded using web-based technologies.
Most cloud computing infrastructures consist of services delivered through shared datacenters, appearing as a single point of access for consumers’ computing needs. Commercial offerings may follow various models, but should be built on the premise that the user pays for the services that he avails of, apart from any fixed one-time set-up costs charged at the beginning. These contracts usually ask for a commitment over a certain period, but the user can insist on service-level agreements (SLAs) ensuring that the service provider delivers a minimum level of service.
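The SLA point can be made concrete with a small sketch that checks achieved availability against a contracted uptime target. The 99.9% target and the downtime figures are hypothetical, purely for illustration:

```python
# Sketch: verify a provider's achieved availability against an SLA.
# The SLA target and downtime numbers below are assumptions.

def availability(total_minutes, downtime_minutes):
    """Fraction of the period the service was actually up."""
    return (total_minutes - downtime_minutes) / total_minutes

minutes_in_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
sla_target = 0.999                # hypothetical 'three nines' promise

achieved = availability(minutes_in_month, downtime_minutes=50)
print(f"achieved {achieved:.4%}, SLA met: {achieved >= sla_target}")
```

A 99.9% monthly target allows roughly 43 minutes of downtime, so the 50-minute outage in this example would breach the SLA; this is the kind of arithmetic worth doing before signing the contract.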
This, in a nutshell, could be cloud computing. I will discuss more on it in my next update.
‘Make it large’ screams an advertisement on television, obviously trying to draw the attention of viewers to sell its products. I am sure viewers understand the purpose of the advertisement, but there is a subtle message that the clip conveys which gets lost in the cacophony of all the noisy pieces that follow one after another. The clip shows someone dissatisfied with doing normal things, who then decides to risk doing something big and succeeds in the end. This is about a paradigm shift, of an effort to come out of the shell, breaking barriers to grow and realize one’s full potential. Friends, it is this message that we should absorb, get inspired by, and use in our professional life.
Life of the ordinary
We often get mired in routine tasks and in managing day-to-day operations. We get so consumed with this routine (of keeping the lights on) that we forget the larger purpose that we came in for. In the little time that we have, we try to add an application here or there and keep ourselves in business. Small successes keep us happy, and we try to make much of the small stuff that we accomplish. But when we go out to a seminar and hear someone speaking of big achievements, we feel shy and hide in the crowd so as to remain ordinary and nondescript. That sure doesn’t give us happiness; we wish we had done something worth talking about. We often do small projects and extensions to the current systems, but it is doing something significant that brings about a difference to the environment we are in.
Doing something big
Let me dwell on a few ingredients of such an effort:
Identification of a requirement – One needs to scan the business environment and seek out a business need that could change the fortunes of the company. This would come through discussions with the business heads or the CEO. Define the requirement and the projected outcome.
Planning – No large project is ever successful without adequate planning. What is required is to lay down all the necessary steps, the resources required, possible risks, and fall-back options. The plan obviously requires the concurrence of those involved in the project.
Review post implementation – After the initial applause, do pause to take a look at the impact that the move has created and whether the project has actually achieved what it set out to achieve. There are many who miss out on this step. The management and the end users are the real spokespersons, and it is only when they speak out in happiness that we can consider ourselves successful.
Of courage and managing risks
However, doing anything big and significant requires courage. We have to break the shackles, come out of our comfort zone and get ready to strain ourselves for a big battle. Success, however, is not guaranteed, and the move therefore tests our risk-taking ability. Low risk entails small rewards; the greater the risk, the larger the gain. We take what is called a calculated risk, i.e. we stay conscious of the pitfalls but prepare well enough to tackle them.
I have at times during my career refused to take the beaten path and embarked on big ventures against the usual advice of colleagues and vendors: doing a big-bang ERP implementation, going ahead with supply chain automation even when very few companies in our industry had done so, attempting digital asset management, etc. The projects carried risks, no doubt, but when we succeeded everyone wanted to be a part of them and claim that they had contributed too.
CIOs design and manage complex information systems to help the company improve its efficiency and effectiveness. With time, as systems expand due to organizational growth and a changing environment, they get unwieldy and it becomes difficult to keep a good grip over them. A need was therefore felt for a process which could ensure delivery of good IT services to the organization at all times. We have seen the challenges faced by CIOs in keeping up with the organization’s demands while, at the same time, dealing with technology changes.
‘IT Service Management (ITSM)’ then came up as a formal methodology for managing the IT environment and services delivery. No single author, organization, or vendor owns the term ‘IT Service Management’; the origins of the phrase are unclear. A variety of frameworks and authors contribute to the overall ITSM discipline, and proprietary approaches are available too.
Let us first understand what ITSM is. IT Service Management (ITSM) is a process-based practice intended to align the delivery of information technology (IT) services with the needs of the enterprise, emphasizing benefits to customers. ITSM involves a paradigm shift from managing IT as stacks of individual components to focusing on the delivery of end-to-end services using best-practice process models. It shares, to some extent, common interests with the process improvement movement and its frameworks and methodologies, for example TQM, Six Sigma, Business Process Management, and CMMI. ITSM may consist of a set of best practices and a natural, progressive life-cycle approach, focused on value generation and business outcomes; it is non-prescriptive and therefore easy to tailor and adopt.
Main components of ITSM
It would make sense to understand the various steps of the process that one has to undergo for implementing ITSM. Let me describe them briefly here. They are drawn from the methodology followed by the Quint Group.
Service Strategy – Vision: This is about understanding the business and aligning IT with business objectives. Therefore, every stage of the service lifecycle is driven by a business case.
Service Design – Blueprint: This consists of a document, made after detailed analysis, which guides the design of architectures, the running of IT services, and the putting in of appropriate and innovative IT infrastructure solutions and services. This provides the right direction for the delivery of various services to the business.
Service Transition – Construct: This part focuses on the broad, long-term change management role and release practices, so that risks, benefits, delivery mechanisms, and ease of ongoing operation of the service are all addressed. Matters like knowledge management, awareness, training, release and deployment management, and service testing and validation are therefore considered adequately.
Service Operation – Provision: This stage focuses on delivery and control process activities like event/ incident management, request fulfilment, problem management, etc. By doing so, we achieve a highly desirable steady state of managing services on a day to day basis.
Continual Service Improvement – Enhance: Continuous improvement is integral to this process and therefore there should be an effort to identify process elements to bring about service management improvements.
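The five stages described above form a loop, with continual improvement feeding back into strategy. A minimal sketch, using the stage names as given, can make that cycle explicit:

```python
# Sketch of the ITSM service lifecycle as an ordered, cyclic sequence.
# Stage names follow the lifecycle described in the text.

LIFECYCLE = [
    "Service Strategy",
    "Service Design",
    "Service Transition",
    "Service Operation",
    "Continual Service Improvement",
]

def next_stage(current):
    """Return the stage following `current`; improvement feeds back
    into strategy, closing the continual-improvement loop."""
    i = LIFECYCLE.index(current)
    return LIFECYCLE[(i + 1) % len(LIFECYCLE)]

print(next_stage("Service Design"))
print(next_stage("Continual Service Improvement"))
```

The wrap-around in `next_stage` is the point: the lifecycle never terminates, which is why complacency after a good implementation (discussed next) is a process failure, not a success.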
If we are currently managing our processes well, we tend to get complacent and do not formalize the service delivery mechanism. We should institutionalize the processes by inviting wider participation, including from the business, so that IT stays focused on delivering business benefits. It may not always be possible for us to develop the best practices ourselves, and therefore some external help could be of value.
Large enterprises, as we know, operate on huge budgets and can easily afford good, senior persons as their CIOs. These companies run several projects and therefore have a staff contingent to manage the work. They can also engage consultants, service providers, and the like, to assist them in their efforts.
Small companies, on the other hand, work under several constraints. It is not easy for them to hire good, senior persons to head their IT departments — for one, they can’t pay the salaries that senior persons demand, and secondly, good and deserving candidates prefer large, known organizations to small companies to further their careers. These small companies, though they may have great ambitions, operate with limited budgets and have to be extremely careful with their spending.
As a result, these companies run small projects which take care of the bare necessities, what we call vanilla implementations, and these do not deliver any significant benefits. With the available resources, they do not have the stomach to try big initiatives. But some of the entrepreneurs, many of whom are new-generation businessmen, are ambitious, want to grow at a fast pace, and think big. For them, IT becomes a drag: it does not facilitate their growth efforts and becomes a constraint.
The way out
The only solution they can think of in such cases is to hire consultants. Consultants from known organizations are often expensive and work only for the duration of the assignment, leaving the rest for the organization to manage. CEOs sometimes trust the hardware/software vendor to provide a solution, as some vendors increasingly claim to have a bouquet of tools and services to take care of all requirements. This, to my mind, is risky and avoidable.
Appointing an advisor
The best option, in my opinion, is for them to engage an advisor who could devote some time regularly to drawing up the right IT road map and helping them select technology and implementation partners. By advisors, I mean senior professionals who have retired or who have chosen to work on their own. Advisors can help guide projects to their logical conclusion and supervise the internal teams to ensure that work is carried out as per the defined schedule.
I have been advising two such companies and I find the arrangement to be of great advantage to them. They are assured that their IT program will be taken care of, and they can concentrate on their business. I also find it very satisfying, as I am able to bring the best practices, knowledge and processes of large enterprises to these smaller companies.
The CIO plays a significant role in his organization and helps in providing various technology solutions. He is often a part of transformational projects that provide a competitive edge to the organization.
The CIO in most cases starts out as a programmer, a systems analyst or a business analyst and slowly rises up the organizational hierarchy, or picks up another opportunity and builds his career until he reaches the level of CIO. There are, of course, cases of lateral entry, with people from other disciplines moving in to take the position. The CIO learns as he builds his career and becomes more effective over time. He becomes well versed in functional processes, systems analysis, project management, people management, organizational behavior and so on. He implements wonderful systems and makes an impact.
In spite of his efforts, at times his work does not get due credit from the management. The reasons could be many; in my opinion, however, he can enhance a few skills to create that impact. People may suggest that he attend workshops to develop leadership qualities, management skills and the like, but here I wish to deal with another factor that can make the CIO more effective.
A case for the CIO to pick up a different role
In his normal progression, as described above, the CIO develops into a functional head and, like other functional managers, acquires skill in his domain but has a narrow outlook. My suggestion is that he should, as part of his career-building exercise, consider parking himself with a consulting or IT services organization for some time. This gives him an opportunity to look at organizations from the other side, as he goes to user organizations and develops solutions for them. In the process he learns new skills that he would otherwise find difficult to acquire. Let me explain the exposure he gets and the skills he learns.
1. Communication skills: As a consultant, he is expected to convince the potential customer of his offering, which he does either through meetings (speaking and convincing skills) or by making presentations. He also gets better at persuasion and negotiation.
2. Writing skills: The engagement begins with his submitting a proposal to the client, which he then has to follow up with clarifications and justifications. His writing skills therefore get honed. When he becomes a CIO, he can use these skills to put up proposals and justifications to the management and get them approved faster.
3. Management perspective: Consultants usually talk to the CEO or senior functionaries, take note of their expectations and engage with them at defined stages of the project. By doing so, they start looking at issues from the management's perspective, which can be of immense help when they later assume the role of a CIO.
4. Project management skills: All consultants and service providers work to fixed timelines and costs. They learn to draw up time schedules and resource plans and have to monitor progress on a regular basis. As we know, any overrun on time or cost is viewed seriously. Project management skill therefore becomes a very useful ingredient of one's role as a CIO.
5. Documentation: We all know that documentation — specifications, systems documentation, user manuals, policies and so on — is usually neglected or poor in user organizations. For consultants and service providers, however, this discipline is fully ingrained in their way of working. This can be a great advantage for such a CIO.
6. Understanding the customer: Whether at the time of selling his services, during project stages or at completion, the consultant always has to be customer-centric. His very existence depends on keeping the customer happy. If he carries this perspective with him when he becomes a CIO, he will treat his internal customers with due care and will learn to build the right relationships.
Is it doable?
Well, I am sure it is. After spending my initial five years with user organizations, I spent the next 10 years with a management consulting organization. What I learnt during this period was of immense value, and at times I found myself doing things a little differently as a CIO from my peers in the industry. I was fortunate to have taken that step, and I strongly advocate this strategy for career growth.