The CICS Dynamic Scripting Feature Pack (an optional product) seems to bring the best of both worlds – the benefits of quickly developing scripted Web 2.0 applications with simple, secure access to CICS applications and data resources. The Dynamic Scripting Feature Pack essentially embeds and integrates technology from WebSphere sMash into the CICS TS runtime and includes a PHP 5.2 runtime along with Groovy language support. Access to CICS resources is achieved using the JCICS APIs.
The Dynamic Scripting Feature Pack can be used to:
CICS dynamic scripting is part of Project Zero (see http://www-01.ibm.com/software/htp/cics/scripting/). The idea of Project Zero is to provide a powerful development and runtime environment for dynamic web applications while keeping the overall experience radically simple.
From a Project Zero developer’s perspective, the application is the server. Each dynamic scripting application is a standard (well-known) directory structure containing content within that structure. All you have to do is create an application, add application code, then start the application. Capabilities such as listening on a specified port and responding to HTTP requests, interacting with a database, sending email, and so on are added to the application by declaring dependencies.
In CICS Dynamic Scripting, all applications depend on the zero.cics.core module, which provides much of the base functionality for a CICS-based Zero application. The desired characteristics of the TCPIPSERVICE, URIMAP, PIPELINE, and JVMSERVER resources can be specified in the zero.config and zerocics.config files. If the application depends on additional features or capabilities related to HTTP, database interactions, Dojo support, email, and more, those dependencies are specified in the ivy.config file in the application’s config directory.
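To put the pieces together, the sketch below shows roughly how such an application is laid out on disk. It is based on general WebSphere sMash conventions plus the configuration file names mentioned above; the application name is hypothetical and the exact directories vary by release, so treat it as an approximate illustration rather than a definitive layout.

```
my-zero-app/                  (hypothetical application name)
    app/
        resources/            RESTful resource handlers (Groovy or PHP)
        scripts/              scripts invoked directly by URL
        views/                rendering templates
    config/
        zero.config           general application configuration
        zerocics.config       TCPIPSERVICE, URIMAP, PIPELINE and JVMSERVER characteristics
        ivy.config            declared feature/capability dependencies
    public/                   static content (HTML, CSS, JavaScript, Dojo)
```

Starting the application then lets the framework resolve the declared dependencies and begin listening on the configured port.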
For those who are new to CICS, the points to note in order to effectively write and implement a Project Zero application using the CICS Dynamic Scripting Feature Pack are:
To reiterate, CICS Dynamic Scripting can be used to:
Speed, simplicity and agility are the keywords in dynamic scripting. It is important to note that while traditional CICS applications are expected to handle large numbers of concurrent users and high volumes with rigorous availability requirements, dynamic scripting applications (developed in days or weeks at low cost, by anybody with basic scripting skills) are expected to be tactical, with fewer concurrent users, lower volumes and fit-for-purpose availability requirements.
COBOL is still the most widely used language on the mainframe – for both online transaction processing and massive batch processing. Even a minor performance improvement in repeatedly executed COBOL programs directly translates into CPU savings and hence cost savings. One of the simplest – and often neglected – ways to optimize COBOL performance is to use the right set of compiler options.
In quite a few performance tuning engagements, the culprit turns out to be the compiler options (set in the JCL or the configuration tool) used by almost all programs – mostly left at their defaults – which drag down performance. Even veteran COBOL programmers tend to ignore these and focus on the programs alone while trying to improve performance. In this article, I would like to highlight the COBOL compiler options that impact performance.
The OPTIMIZE compiler option can be used to improve the efficiency of the generated code. NOOPTIMIZE is the default. OPTIMIZE(STD) results in the following optimizations:
The OPTIMIZE(FULL) option additionally:
Note that OPTIMIZE requires more CPU time for compiles than NOOPTIMIZE, but generally produces more efficient run-time code. It is suggested that NOOPTIMIZE be used while a program is being developed, since compiles are frequent at that stage and it is also relatively easier to debug a program when code is not moved around.
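For reference, compiler options can be set either in a CBL (PROCESS) statement at the top of the source or in the PARM of the compile step. The fragment below is only an illustrative sketch – the program name, the JCL step (shown without its SYSIN/SYSLIN/SYSPRINT DD statements) and the particular mix of options are placeholders, not a recommendation for every workload.

```
       CBL OPTIMIZE(FULL),ARITH(COMPAT),NUMPROC(PFD),TRUNC(OPT)
       IDENTIFICATION DIVISION.
       PROGRAM-ID. SAMPLEPG.

//* ...or via the compile step in JCL (IGYCRCTL is the Enterprise
//* COBOL compiler; the option list is again just an example)
//COB      EXEC PGM=IGYCRCTL,
//        PARM='OPTIMIZE(FULL),AWO,FASTSRT,XMLPARSE(COMPAT)'
```

Options coded on the CBL statement generally override those supplied through the PARM (unless they are fixed at installation), which is convenient when a specific program needs to deviate from the shop defaults.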
With the DYNAM option, all subprograms invoked through the CALL literal statement are loaded dynamically at run time. NODYNAM is the default. DYNAM allows sharing of common subprograms and provides control over virtual storage use (storage that can be freed using the CANCEL statement), but at a performance cost, because the call must go through a library routine; with NODYNAM, the call goes directly to the subprogram. Detailed information is available at http://itknowledgeexchange.techtarget.com/enterprise-IT-tech-trends/static-dynamic-linking-in-ibm-cobol/.
According to IBM, for a CALL intensive application, the average overhead associated with the CALL using DYNAM ranged from 40% to 100% compared to that of NODYNAM.
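To make the distinction concrete, here is a small, hedged COBOL fragment (the program and data names are placeholders). Compiled with NODYNAM, the CALL to the literal 'SUBPROG' is resolved statically at link-edit time; compiled with DYNAM, the same CALL loads SUBPROG at run time, and the module can later be removed from storage with CANCEL. A CALL through a data name is always dynamic, regardless of the option.

```cobol
       WORKING-STORAGE SECTION.
       01  WS-AREA            PIC X(100).
       01  WS-SUB-NAME        PIC X(8)   VALUE 'SUBPROG'.
       PROCEDURE DIVISION.
      *    CALL literal: static under NODYNAM, dynamic under DYNAM
           CALL 'SUBPROG' USING WS-AREA
      *    CALL identifier: always a dynamic call
           CALL WS-SUB-NAME USING WS-AREA
      *    A dynamically loaded subprogram can be released from storage
           CANCEL WS-SUB-NAME
           GOBACK.
```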
Using the FASTSRT compiler option improves the performance of most sort operations. NOFASTSRT is the default. With FASTSRT, the DFSORT product (instead of Enterprise COBOL) performs the I/O on the input and output files named in the SORT . . . USING and SORT . . . GIVING statements.
One program that processed 100,000 records was 45% faster when using FASTSRT compared to using NOFASTSRT and used 4,000 fewer EXCPs.
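FASTSRT helps only the SORT ... USING ... GIVING form, where DFSORT can take over the file I/O itself; sorts driven by an INPUT PROCEDURE or OUTPUT PROCEDURE are not eligible. Below is a compact, hedged sketch of the qualifying pattern (the file, DD and field names are placeholders):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. SORTDEMO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT INPUT-TRANS-FILE  ASSIGN TO INTRANS.
           SELECT SORTED-TRANS-FILE ASSIGN TO OUTTRANS.
           SELECT SORT-WORK-FILE    ASSIGN TO SORTWK01.
       DATA DIVISION.
       FILE SECTION.
       FD  INPUT-TRANS-FILE.
       01  IN-REC                   PIC X(80).
       FD  SORTED-TRANS-FILE.
       01  OUT-REC                  PIC X(80).
       SD  SORT-WORK-FILE.
       01  SORT-REC.
           05  SW-CUSTOMER-ID       PIC X(10).
           05  FILLER               PIC X(70).
       PROCEDURE DIVISION.
      *    SORT ... USING ... GIVING is the form FASTSRT can optimize:
      *    DFSORT itself performs the I/O on the input and output files.
           SORT SORT-WORK-FILE
               ON ASCENDING KEY SW-CUSTOMER-ID
               USING  INPUT-TRANS-FILE
               GIVING SORTED-TRANS-FILE
           GOBACK.
```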
The XMLPARSE(XMLSS) option (the default) selects the z/OS XML System Services parser, whereas XMLPARSE(COMPAT) uses the parser built into the COBOL runtime. While XMLSS provides additional capabilities, at present the COMPAT option is found to be faster by 20-108%. But it is important to note that as IBM focuses more on the z/OS XML System Services parser, this performance difference is likely to shrink.
The THREAD option, which enables multi-threading in COBOL and can also be used in a non-threaded application (see http://itknowledgeexchange.techtarget.com/enterprise-IT-tech-trends/multithreading-in-cobol/), results in runtime performance degradation due to the overhead of the serialization logic that is automatically generated. NOTHREAD is the default.
The ARITH option controls the maximum number of digits allowed for decimal numbers. ARITH(COMPAT), the default, allows a maximum of 18 digits (which should serve most requirements well), while ARITH(EXTEND) allows up to 31.
ARITH(EXTEND) causes performance degradation for all decimal data types because of the larger intermediate results. The performance impact is 16% on average, and for programs with heavy use of decimals it can be as high as 40%.
The AWO option implicitly activates the APPLY WRITE-ONLY clause for all physical sequential, variable-length, blocked files (whether or not the clause is specified in the program). NOAWO is the default. The APPLY WRITE-ONLY clause makes optimum use of buffer and device space: with it in effect, the file buffer is written to the output device only when the next record does not fit in the unused portion of the buffer; without it, a file buffer is written to the output device as soon as it does not have enough space left for a maximum-size record.
According to IBM, a program using variable-length blocked files and AWO was 86% faster than NOAWO (as the result of using 98% fewer EXCPs to process the writes).
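If you prefer to control this at the program level rather than globally through the compiler option, APPLY WRITE-ONLY can also be coded explicitly in the I-O-CONTROL paragraph, as in this hedged fragment (the file and DD names are placeholders, and the clause matters only for blocked, variable-length sequential output files):

```cobol
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT REPORT-FILE ASSIGN TO RPTOUT.
       I-O-CONTROL.
      *    Flush the buffer only when the next record will not fit -
      *    the same behaviour the AWO compiler option turns on globally.
           APPLY WRITE-ONLY ON REPORT-FILE.
```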
The BLOCK0 option changes the default for QSAM files from unblocked to blocked, thus gaining the benefit of system-determined blocking for output files. NOBLOCK0 is the default. BLOCK0 applies to each file that meets all of the following criteria:
If BLOCK0 is also specified, AWO might apply to more files than it otherwise would (since AWO applies only to blocked variable-length records). One program using BLOCK0 was found to be 88% faster than with NOBLOCK0 (using 98% fewer EXCPs).
Note that specifying BLOCK0 for existing programs might change the behavior of the program – especially for files opened as INPUT without a block size.
NUMPROC(PFD) improves the performance of processing numeric internal decimal and zoned decimal data. With NUMPROC(PFD), the compiler assumes that the data has the correct sign and bypasses the sign fix-up processing. But use this option only if your program data agrees exactly with the following IBM system standards.
Note that NUMPROC(NOPFD) is the default – and it is recommended when the numeric internal decimal and zoned decimal data might not carry proper signs (especially if the program has to process external data files). Also note that NUMPROC(NOPFD) or NUMPROC(MIG) should be used if a COBOL program calls programs written in PL/I or FORTRAN.
NUMPROC(PFD) – which can provide a performance benefit of 5-20% – is advisable for performance-sensitive applications, after ensuring that the necessary conditions are met.
TRUNC(OPT) is another tuning option for performance-sensitive applications and should be used only when the data in the application program conforms to the PICTURE and USAGE specifications.
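As a rough illustration of why TRUNC matters (the field name below is a placeholder): TRUNC affects binary (COMP) receiving items. With TRUNC(STD) the compiler generates extra truncation code to honor the PICTURE; with TRUNC(OPT) it assumes the value always fits the PICTURE and can skip that code, which is faster but safe only if the assumption actually holds.

```cobol
       WORKING-STORAGE SECTION.
       01  WS-COUNT           PIC S9(4)  COMP  VALUE 0.
       PROCEDURE DIVISION.
      *    TRUNC(STD): the result is truncated to 4 decimal digits.
      *    TRUNC(OPT): truncation code is omitted on the assumption
      *    that WS-COUNT never exceeds its PICTURE - faster, but only
      *    valid if the data really conforms.
           ADD 1 TO WS-COUNT
           GOBACK.
```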
While the above points are related to runtime efficiency, the following two options are worth noting in a development environment:
With judicious use of compiler options – suited to the given environment – performance benefits can be achieved without modifying the programs.
As rightly pointed out by a Forrester report, “improving perception of EA” is the key challenge and the common goal of enterprise architects. While developing an Enterprise Architecture blueprint is itself a commendable job, it is only part of the job done. The real uphill task is to make the blueprint a useful one, and this is where “Architecture Governance”, which has become the byword for ensuring the effectiveness of Enterprise Architecture, comes in. Today, multiple definitions, frameworks and tools are available for establishing Architecture Governance.
Of the available definitions, the Open Group definition is quite comprehensive. “Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level. It includes the following:
While it is good to have detailed frameworks, in my opinion most organizations spend a lot of effort on establishing the hierarchy, roles, responsibilities, reporting structure, communication structure, periodic meetings and, of course, debates. In effect, while it started as a means of managing EA, “establishing Architecture Governance” has become a full-fledged project in itself.
I am sure most would agree that the typical Enterprise Architecture governance framework sounds more like a wish list, especially in its expectations of the roles to be played by senior management and CxOs. The ambiguity of the metrics listed as the means to measure the effectiveness of EA is another thought-provoking area. For example, the metric “number of changes made to the EA blueprint” can be read negatively (the EA blueprint originally created was not up to the mark) or genuinely positively (the blueprint is widely used, hence the large amount of feedback, and/or the EA team is open-minded enough to accept comments and make changes).
In any case, in trying to establish “The Right EA Governance Framework”, we seem to be losing sight of the original intent, namely “Getting Value out of Enterprise Architecture” (this is not about establishing the benefits of EA, but about actually reaping those benefits).
Simplifying EA governance is the key to realizing its purpose, and some of the points to note are:
With the explosion in the No. 3 reactor of the Fukushima Daiichi nuclear power station, fears of the spread of nuclear radiation in Japan have increased. The after-effects of nuclear radiation linger for a long time, and the possibility of it spreading to neighboring countries cannot be ruled out. So I tried to find out more about nuclear radiation and its impacts – and am sharing it in this blog (though it has nothing to do with IT).
In the 1986 Chernobyl nuclear disaster, in addition to the almost immediate deaths of thirty workers, a few thousand deaths occurred due to radiation (according to the UN). Another sign of the damage to health is that about 6,000 people who were under 18 at the time of Chernobyl have since developed thyroid cancer – a disease that usually affects only older people. These effects are largely attributed to the fact that the accident was disclosed more than a day after the explosion. Another sad factor is that research into nuclear health hazards is not happening in full force – and in some cases has been dropped entirely for lack of funds.
Before going into specifics, I thought it would be a good idea to get some background on nuclear radiation. Nuclear radiation is emitted by unstable atoms. When radiation carries enough energy to detach electrons from atoms or molecules, it is called ionizing radiation. Radiation poisoning, also called radiation sickness or creeping dose, happens when exposure to ionizing radiation damages organ tissue. The clinical name for radiation poisoning is Acute Radiation Syndrome (ARS), which is divided into three types: hematopoietic, gastrointestinal and neurological/vascular. Treatment of acute radiation syndrome is generally supportive, with blood transfusions and antibiotics.
Not all levels of exposure to nuclear radiation are fatal. Exposure is measured in rems or sieverts (Sv), where 1 Sv = 100 rems. The following table summarizes the details on radiation effects available at http://scienceray.com/biology/human-biology/harmful-effects-of-nuclear-radiation/.
| Dose | Symptoms | Physiological effects |
|---|---|---|
| Below 1 Sv | Typically unnoticeable. | Very negligible. |
| 1 to 2 Sv | Mild symptoms start to occur within the first 3 to 6 hours of exposure and may last for a day. After a few days the symptoms reappear as blood cells die without being replaced; this phase can last up to 4 weeks. | The intestinal tract lining is damaged, resulting in nausea, diarrhea and vomiting of blood. In some cases, sperm-forming tissues are damaged. As blood cells die, loss of appetite and fatigue can result. |
| 2 to 4 Sv | Symptoms such as nausea start to appear within the first hour of exposure and last a couple of days. Severe symptoms such as hair loss, fatigue, diarrhea, kidney problems and hemorrhage reappear after a week or so. | The intestinal tract or the hematopoietic tissues are damaged. White blood cells decrease, leaving the person vulnerable to infections that are hard to overcome. There is also a possibility of mortality. |
| 4 to 6 Sv | Symptoms appear within the first half-hour of exposure and may last up to 2 days. They reappear after a period of 7 to 14 days. | The blood tissues are affected in this range. Infections and hemorrhage are the major causes of mortality, which may occur between 2 and 12 weeks from exposure. |
| 6 to 10 Sv | Symptoms appear within the first 15 minutes of exposure and last up to 2 days. They reappear after a period of 5 to 10 days. | The bone marrow and the gastrointestinal tissues are damaged. The chances of recovering to normal are very slim, and complete recovery is not guaranteed. The mortality rate is very high, due to causes such as infection and internal bleeding. |
| Greater than 10 Sv | Very severe; a person may collapse within a few hours of exposure. | Very high mortality due to causes including intestinal tract damage, gastrointestinal problems, severe diarrhea and low blood pressure. |
In addition to the direct health impacts, radiation exposure can also increase the probability of developing other diseases – like cancer – though these are not included in the term radiation sickness. It is feared that nuclear radiation might also take its toll on vegetation and wildlife – like a travelling death.
The ill-effects of nuclear radiation can be reduced by:
The health impact in Japan has been kept low – because the problem was detected early and acted upon
The World Health Organization (WHO) has said the public health risk from Japan’s atomic plants remained “quite low.”
Lennart Carlsson, director of Nuclear Power Plant Safety in Sweden, said: “The wind direction is right for people in Japan. It’s blowing out to the Pacific, I don’t think this will be any problem to other countries.”
Malcolm Crick, Secretary of the U.N. Scientific Committee on the Effects of Atomic Radiation, said: “The radiation levels were detectable but in terms of human health it was nothing.”
The key reasons for the Health Impact being low are:
Other Impacts do exist
For nearly four weeks, Japanese emergency crews have been spraying water on the damaged Fukushima nuclear reactors, a desperate attempt to avert the calamity of a full meltdown.
It is estimated that about 15 million gallons of highly radioactive water has already accumulated. The immediate problem is how to store all that water until the reactors and the spent fuel pools are brought under control. Tepco appears to have released a couple of million gallons of the least contaminated water into the ocean this week, on the expectation that its radioactive elements would be diluted in the ocean’s mass. If repeated, such moves would be vigorously opposed, especially by fishermen whose livelihoods would be affected. International law also forbids dumping contaminated water into the ocean if viable technical solutions are available down the road.
Ultimately, the high-level radioactive substances in the water will have to be safely stored, processed and solidified. Several methods – evaporation, solidification, vitrification – are discussed. Experts seem to be strongly disagreeing on the safest mode of disposal of the highly radioactive water.
Victor Gilinsky, a former member of the Nuclear Regulatory Commission and longtime advisor on nuclear waste, says that the problems facing Japan are greater than those of decommissioning the eight reactors at Hanford, the most highly contaminated nuclear weapons site in the U.S., since Hanford involved no meltdown and no comparable risk of worker contamination.
Exposing the material to open air could allow radioactive iodine and other volatile substances to blow off the site, adding to the remote contamination that is already spreading dozens of miles from the plant.
“If the contaminated water has relatively high tritium or tritiated water concentration, then treatment could be more complicated,” said Joonhong Ahn, a nuclear waste expert at UC Berkeley.
In any case, the process of cleaning up the water will have to be handled in a specially designed industrial complex and could take hundreds or even thousands of workers many years, even decades, to complete. The cost could run into the tens of billions of dollars. The high levels of ground contamination at the site also raise concerns about the viability of people working there in the coming decades.
Current Medical Trends
It seems that the U.S. government has allocated more than $500 million for investment in new therapies under two laws passed in 2004 and 2006, the Project BioShield Act and the Pandemic and All-Hazards Preparedness Act, Spoonful of Medicine reports.
Some of the encouraging medical trends related to Nuclear Radiation are:
Some researchers are optimistic that the Japan Nuclear reactor breakdown will prompt governments to raise the funds needed to carry out Research on Radiation Effects and other related studies.
Let us hope that everyone involved sees the value of the age-old adage “Prevention is better than cure”, works towards making nuclear energy safer, and takes the best steps forward in preventing such disasters.
Traditional Business Process Management (BPM) focuses on activities and on the order and sequencing of those activities to solve a problem – more like an imitation of mass production in a factory. While BPM does serve its purpose, there is a growing need to automate and track unpredictable “cases” that do not follow a well-defined process. There are situations where not all the activities, or their order, are known beforehand, and the specific context has to be taken into account to make those decisions. This is where “Case Management” comes in – for taming the untamed processes.
Case Management as a term seems to have different definitions in different contexts, as can be seen from the Wikipedia definition. In that sense, I tend to agree with a Forrester blog comment that says, “Unfortunately, case management is a lousy term for a great idea”.
But in domains like health care, insurance and law, the term is well understood. The following definition by the Case Management Society of Australia, in the health care context, sounds relevant to what we are discussing: “Case management is a collaborative process of assessment, planning, facilitation and advocacy for options and services to meet an individual’s health needs through communication and available resources to promote quality cost-effective outcomes”.
Case Management is mainly about processing a case, which typically has a subject – say an individual (customer, employee, patient), an entity (business, government) or an event (security violation, fraud occurrence or system outage). The perspective of case management is to empower the knowledge worker to solve the problem through a flexible solution – including the ability to add tasks to the process – that, via a case folder mechanism, exposes all the case information (including documents) and all the tasks that might be required to track and solve the business problem.
Dynamic Case Management (DCM) differs from traditional BPM in that:
According to Forrester, the drivers for the increased interest in case management include:
Case management offerings are designed to meet these needs with their ability to handle complex, long running business processes involving numerous stakeholders and spanning multiple systems (operational, content-centric, collaborative, analytics, etc.). Typical applications of DCM include exception handling, complaint or dispute management, contract management, lending applications, benefits enrollment, invoice processing, change request, and incident reaction.
DCM is also expected to pave way for Lean thinking about knowledge workers by “getting the right things to the right place at the right time in the right quantity to achieve perfect work flow, while minimizing waste and being flexible and able to change.”
According to Forrester’s evaluation of dynamic case management (DCM) vendors, Pegasystems, IBM, EMC, Appian, Singularity, and Global 360 lead with the most dynamic, visionary platforms. Pallas Athena, Sword Ciboodle, and Cordys are strong performers offering robust platforms that provide innovation in different DCM areas. ActionBase ranks as a Contender, filling the gap between email chaos and process-centric DCM. DCM as a new category of software will emerge as a distinct market by 2013.
DCM is already being viewed by organizations as an offering with great potential: goal-driven processes, leveraging the expertise of knowledge workers, and improving overall agility. In addition to integration issues, BPM has been struggling with the conflict between business rules and exception-case handling, and in quite a few cases has been frustrating to knowledge workers. For knowledge workers, DCM now holds the promise of flexible end-to-end solutions that cover all aspects of a complex process and drive broader participation from the relevant stakeholders.
A major difference among de-duplication product offerings is when the de-duplication occurs: “inline” (or “real-time”, as the data is flowing – before it is written) or “post-process” (after the data has been written). The benefits and drawbacks of inline versus post-process de-duplication are much debated.
With inline de-duplication, the hash calculations are made in real time as the data enters the device. If an incoming block is already stored on the system, the new block is not stored; instead, a reference to the existing block is added. The obvious benefit of inline de-duplication is that it requires less storage, since data is never duplicated at all. But because the hash calculations and lookups have to take place as the data is being stored, throughput can be lower.
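To make the mechanism concrete, here is a minimal sketch of hash-based, fixed-size-block de-duplication in the inline style, written in Python purely for illustration. Real products use far more sophisticated chunking, fingerprinting, collision handling and on-disk index structures, so treat this only as a demonstration of the reference-instead-of-store idea.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size chunking; real systems often use variable-size chunks

class InlineDedupStore:
    """Toy inline de-duplication: store each unique block once, keep references."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> block bytes (the single stored copy)
        self.refs = []     # ordered fingerprints describing the written stream

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            # The hash lookup happens in the write path (the "inline" cost);
            # duplicate blocks are never written to the store.
            if fingerprint not in self.blocks:
                self.blocks[fingerprint] = block
            self.refs.append(fingerprint)

    def read(self) -> bytes:
        # Reassemble the logical stream from references to unique blocks.
        return b"".join(self.blocks[fp] for fp in self.refs)


if __name__ == "__main__":
    store = InlineDedupStore()
    payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # 3 identical blocks + 1 unique
    store.write(payload)
    assert store.read() == payload
    print(f"logical blocks: {len(store.refs)}, stored blocks: {len(store.blocks)}")
    # -> logical blocks: 4, stored blocks: 2
```

In the toy run, writing three identical 4 KB blocks plus one unique block stores only two unique blocks, and the lookup cost sits in the write path – which is exactly the inline throughput trade-off discussed above.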
The de-duplication process is CPU-intensive and involves processing every bit of information in a given volume or backup. So it is argued that if de-duplication occurs, say, during backup processing, the backup process will slow down. To refute this argument, vendors with inline de-duplication have demonstrated that their products’ performance is similar to that of their post-process counterparts.
With “post-process” de-duplication, the new data is first stored on the device and the analysis for de-duplication happens at a later time. Again, the obvious benefit is that no hash calculation or lookup is required before storing the data, so storage performance is not impacted. The essential drawback is that the duplicate data is stored – albeit only for a short time before de-duplication – which may require more storage space than is actually needed (the worst case being when the available storage is near full capacity).
Post-process de-duplication vendors’ response to this is that their solutions are designed so that de-duplication completes quickly, and certainly before the next scheduled backup or vault to tape occurs (that is, the extra storage requirement is not high enough to be treated as a major drawback).
Inability to do forward referencing is mentioned as a drawback for “inline” de-duplication.
The typical approach in de-duplication (also referred to as “reverse referencing”) is to eliminate the recent duplicate data and create pointers to the previously stored data. As an alternative, forward referencing (currently supported, it seems, by only one vendor – SEPATON) keeps the most recent data and replaces the previously stored data with pointers to the most recent copy. Though forward referencing requires more work, the benefit is that in an emergency the most recent backup is in complete form for a faster restore.
Some points in favor of “inline” de-duplication (mostly as claimed by the vendors Data Domain and IBM) are:
The following are good references while making decisions on inline versus post-processing de-duplication:
Some vendors (NetApp, Quantum) are expected to give users the option to configure or toggle between inline and post-process de-duplication, depending on the available CPU, storage size and workload.
Data de-duplication, with its promise of reducing the storage capacity needed for a backup environment by 95% (according to Forrester), has gone fully mainstream, with more than 84% of Gartner survey respondents currently using or planning to use it.
Data de-duplication (also called “intelligent compression” or “single-instance storage”) is a specialized data compression technique that reduces the storage needs by eliminating redundant data and storing only one unique instance of the data.
Unlike standard file compression techniques, the focus of data de-duplication is to take a very large volume of data and identify large sections – even entire files – that are identical, and store only one copy. The standard example is an email system where there could be 100 instances of the same 1 MB file attachment; with de-duplication, only one instance of the attachment is actually stored, reducing a 100 MB storage demand to just 1 MB. The single stored copy can in turn be compressed with a single-file compression technique, providing further storage reduction.
The Storage Networking Industry Association (SNIA) refers to data de-duplication as “The replacement of duplicate data with references to a shared copy in order to save storage space. This may be done at a whole-record level or at a sub-record level.” Refer to http://searchstorage.techtarget.com/definition/data-deduplication for detailed definition and techniques used.
With hardware costs falling, it may hardly seem to make sense to get excited about storage-saving techniques. The need and significance can be understood from the points made by a Forrester report, which claims that:
And with backup vendors offering de-duplication technology and claiming de-duplication ratios of 20:1 or more (up to 50:1), it is catching the interest of most IT professionals. According to Forrester, data de-duplication, along with cloud storage, has the potential to make disk as cheap as tape.
Forrester’s report dated July 2007 says it does not expect tape to completely vanish for another five years at least, “but we believe that firms will continue to shift more of their backups (as well as their investment) to disk as their first line of protection. Technology such as de-duplication will accelerate this shift”.
Storage vendors, and even non-storage vendors, definitely seem to be doing all they can to keep the trend going. NetApp with its own de-duplication feature, EMC with its acquisition of Data Domain for de-dupe, IBM with its patented inline de-duplication technology, Dell with its intention to acquire Ocarina Networks for its content-aware de-dupe and compression technology, and Quest with its announced acquisition of BakBone Software and its data de-duplication software are all set to make it the hottest trend.
Data de-duplication provides cost savings directly by lowering storage space requirements, and indirectly by reducing power and cooling costs and network bandwidth costs; it may also save software license costs. In addition to saving costs, it carries other benefits such as longer retention periods, better recovery time objectives, reduced I/O and improved availability, making it a definite option to explore (if not explored already).
IPv6, while having lots of advantages, can be expected to bring back the old problems of domain squatting, domain name hijacking and phishing.
Domain squatting (or cybersquatting) is registering, trafficking in, or using a domain name in bad faith with the intent to profit from the goodwill of a trademark belonging to someone else. Typically, the cybersquatter, after getting the domain name, offers to sell it to the person or company that owns the trademark at a very high price.
One of the best defences against domain squatting is for legitimate persons and companies to get the right domain(s) early. Even if the decision to move fully to IPv6 can be delayed, it would be a good idea to get the domain names now.
The process is quite straightforward and inexpensive. A party wishing to register a domain name may do so by contacting a registrar or a company that has a reseller agreement with a registrar. At the time of registration, the registrant provides the registrar with technical and contact information to be associated with the domain name, and enters into a registration agreement with the registrar. The registrar then submits the information associated with the domain name to the registry, which maintains the authoritative, master database of all domain names registered in a particular top-level domain.
Registrants may choose to transfer their domain names from one registrar to another. Such transfers are conducted according to the Inter-Registrar Transfer Policy (see http://www.icann.org/transfers/).
During the IPv6 transition, another disturbing trend, “IP cybersquatting”, is expected to emerge. Organizations that have received IP addresses in large blocks might, instead of returning them, try to profit by selling the unused numbers to the highest bidder. ARIN (the American Registry for Internet Numbers) hopes to avoid the problem by encouraging organizations to make the IPv6 transition now and turn in unused IP addresses. According to ARIN, “Internet numbers are issued according to policies that say if you don’t have a need for IP addresses you should return them” and “if required an audit would be conducted to identify and get back the unused IP addresses”.
The possibility of selling IPv4 addresses for a large payout while still remaining the registered owner of the block in the whois and RIR databases may sound attractive (and even legitimate) to some. But the responsibility for any activity using the IP addresses registered to you falls on your shoulders.
Trend Micro predicts that “As users start to explore IPv6, so will cyber criminals”. It also comments that regional TLDs (top-level domains) will open the door to using “Cyrillic characters in place of similar-looking Latin characters” as a means of phishing attack. Refer to http://www.mydigitalfc.com/news/phishing-attacks-becoming-more-localised-and-targeted-231 for some interesting information on phishing.
Domain name hijacking refers to the wrongful taking of control of a domain name from the rightful name holder. Detailed information of domain name hijacking and how it can be avoided is available at http://www.icann.org/en/announcements/hijacking-report-12jul05.pdf. The significance of Domain hijacking can be understood by what the report says:
Based on the findings and recommendations of the above mentioned report, ICANN seems to have changed the policies (available at http://www.icann.org/en/udrp/) to ensure speedy resolution of domain name disputes.
The general opinion is that, in spite of the large number of such cases and their impact, enough has not been done to prevent problems like domain name theft and abuse, and it is left to individual users to take steps to protect themselves.
While planning for IPv6 (even if the decision is to remain on IPv4 for some time to come), it is advisable for organizations to keep these aspects of squatting and hijacking in mind and take preventive action.