IBM Virtual Desktop is built on an open architecture that provides flexibility: it supports virtual Windows as well as Linux desktops (including Ubuntu, Red Hat and Novell platforms), along with a variety of storage systems, directories, peripherals and remote display protocols. IBM Virtual Desktop for Smart Business incorporates the VERDE product – a leading, purpose-built desktop management and provisioning solution from Virtual Bridges – integrated with the IBM Foundation for Smart Business.
Virtual Bridges has been named a “Cool Vendor” in Gartner’s Cool Vendors in Personal Computing, 2011 report. According to Virtual Bridges, VDI Gen2 and VERDE are just about the coolest technologies on the market today. VERDE is an end-to-end desktop management solution combining virtual desktop infrastructure (VDI), offline VDI for disconnected and mobile use, and remote branch desktop virtualization capabilities.
The Gold Master Image provisioning model is the most important feature of the VERDE solution: it reduces the number of images requiring management – cutting storage and maintenance costs – and provides malware resistance for all sessions. With this model, only a few desktop images (Gold Masters) need to be created, each with the OS and the applications that a particular class of users needs. Users always run a read-only copy of the latest authorized Gold Master Image, with all their personal settings and documents written to a separate User Disk.
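A minimal sketch of how such a model can be composed – the class names and fields below are purely illustrative and are not taken from the VERDE product itself: every session reads from a shared, immutable master image, while all writes land on a per-user disk.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GoldMasterImage:
    """A read-only desktop image shared by every session of a user class."""
    name: str
    os: str
    applications: tuple

@dataclass
class UserDisk:
    """Writable per-user storage for settings and documents."""
    owner: str
    files: dict = field(default_factory=dict)

@dataclass
class DesktopSession:
    """A running session: reads come from the master, writes go to the user disk."""
    master: GoldMasterImage
    user_disk: UserDisk

    def write(self, path: str, data: str) -> None:
        self.user_disk.files[path] = data   # never touches the Gold Master

# Usage: many sessions share one master; each user keeps only a small writable disk.
office_master = GoldMasterImage("office-win7", "Windows 7", ("Office", "Browser"))
session = DesktopSession(office_master, UserDisk(owner="alice"))
session.write("/home/alice/notes.txt", "personal data stays on the User Disk")
```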
Each VERDE server is responsible for authenticating and authorizing users to Gold Images and for running the selected virtual desktop sessions. Multiple VERDE servers can be part of a VERDE Cluster, with a designated Cluster Master for hot standby. VERDE can scale up to ten thousand clustered servers or down to a single-server configuration. VERDE Cluster servers are completely stateless and use a Distributed Connection Brokering architecture, which increases the scalability of the overall solution.
The VERDE web-based monitoring console provides real-time visibility into all virtual desktop sessions running on VERDE cluster servers – grouped by user, by server, or by type of Gold Image. The console also provides real-time server utilization metrics: green (lightly loaded and can run more VDI sessions), yellow (system load is near the recommended maximum), and red (system is at the maximum or over the peak limit).
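The thresholds below are illustrative only – the console’s actual limits are not given here – but they show the kind of classification behind the colour coding:

```python
def utilization_status(sessions: int, recommended_max: int, near_factor: float = 0.8) -> str:
    """Colour-code a server the way the monitoring console does.

    near_factor is an assumed threshold for "near the recommended max";
    the console's real thresholds are not documented here.
    """
    if sessions >= recommended_max:
        return "red"      # at the maximum or over the peak limit
    if sessions >= near_factor * recommended_max:
        return "yellow"   # load is near the recommended maximum
    return "green"        # lightly loaded; can run more VDI sessions

print(utilization_status(40, recommended_max=60))   # green
print(utilization_status(55, recommended_max=60))   # yellow
```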
The VERDE client authenticates users and provides access to users’ desktop sessions running on VERDE servers. The VERDE Client can run on Windows, Linux and Mac workstations, netbooks and PDAs. A user can launch more than one virtual desktop session simultaneously by running multiple client sessions. The client protocol is optimized to provide the best user experience based on the end user’s location (LAN, WAN or a branch location).
Some of the other important features and capabilities of VERDE include:
Virtual Bridges points out that it is the first vendor to:
Virtual Bridges has also been recognized by CRN as a hot emerging vendor – ref. http://www.crn.com/slide-shows/channel-programs/226500092/10-hot-emerging-vendors-for-august-2010.htm.
As IBM points out, VERDE helps increase the adoption of VDI and make it mainstream by addressing the cost, complexity and coverage challenges traditionally associated with VDI technology. Offline VDI, support for both Windows and Linux, and connectivity from almost any device provide maximum coverage; the unified console reduces complexity; and VERDE comes at a significantly lower cost than traditional VDI without compromising on security.
Of course, no virtualization initiative is complete without a mention of cloud – and yes, VERDE does deliver desktops as a public/private cloud service.
Mainframe operating system z/OS is a share-everything runtime environment that provides for resource sharing through its heritage of virtualization technology. z/OS gets work done by dividing it into pieces and giving portions of the job to various system components and subsystems that function interdependently.
The workload management (WLM) component of z/OS controls system resources, while the recovery termination manager (RTM) handles system recovery.
At any point in time, one component or another gets control of the processor – makes its contribution, and then passes control along to a user program or another component. The control typically gets passed when a job has to wait for information to be read in from, or written out to, a device such as a tape drive or printer.
As with memory for a personal computer, mainframe central storage is tightly coupled with the processor itself, whereas mainframe auxiliary storage is located on (comparatively) slower and cheaper external disk and tape drives.
Typical z/OS middleware (between the operating system and an end user or end-user applications) includes:
System Address Spaces and Master Scheduler
Many z/OS system functions run in their own address spaces. The master scheduler subsystem runs in the address space called *MASTER* and is used to establish communication between z/OS and its own address spaces. Master initialization routines initialize system services, such as the system log and communication task, and start the master scheduler address space.
Batch processing is the most fundamental function of z/OS. Many batch jobs are run in parallel and Job control language (JCL) is used to control the operation of each job.
z/OS requires the use of various subsystems, such as a primary job entry subsystem or JES. An address space is created for every batch job that runs on z/OS. Batch job address spaces are started by JES.
Multiple initiators (each in its own address space) permit the parallel execution of batch jobs. Correct use of JCL parameters (especially the DISP parameter in DD statements) allows parallel, asynchronous execution of jobs that need access to the same data sets – a technique that exploits batch parallelism and improves availability.
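A minimal sketch – in Python rather than JCL, with made-up names – of the serialization that DISP implies: DISP=SHR lets jobs read a data set concurrently, while DISP=OLD requests exclusive use and must wait for other users to finish.

```python
import threading

class DataSetAllocator:
    """Toy model of data-set serialization as implied by the JCL DISP parameter.
    De-allocation (job end) is omitted for brevity."""
    def __init__(self):
        self._lock = threading.Lock()
        self._readers = 0
        self._exclusive = False

    def allocate(self, disp: str) -> bool:
        """Return True if the job can be dispatched now, False if it must wait."""
        with self._lock:
            if disp == "SHR" and not self._exclusive:        # DISP=SHR: concurrent readers
                self._readers += 1
                return True
            if disp == "OLD" and not self._exclusive and self._readers == 0:
                self._exclusive = True                        # DISP=OLD: exclusive use
                return True
            return False

payroll_master = DataSetAllocator()
print(payroll_master.allocate("SHR"))   # True  - report job reads the data set
print(payroll_master.allocate("SHR"))   # True  - a second reader runs in parallel
print(payroll_master.allocate("OLD"))   # False - update job waits for exclusive use
```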
There are address spaces for middleware products such as DB2, CICS, and IMS (referred to as “secondary subsystems”). Typically an automation package starts these subsystems and other tasks in a controlled sequence. Subsystems are defined in a special file of system settings called a parameter library (PARMLIB).
Workload Management (WLM)
WLM manages the processing of workloads in the system according to the business goals, such as response time. WLM also manages the use of system resources, such as processors and storage, to accomplish these goals.
WLM has three objectives:
1. To achieve the business goals that are defined by the installation, by automatically assigning sysplex resources to workloads based on their importance and goals (goal achievement).
2. To achieve optimal use of the system resources from the system point of view (throughput).
3. To achieve optimal use of system resources from the point of view of the individual address space (response and turnaround time).
Goal achievement is the first and most important task of WLM. Optimizing throughput and minimizing turnaround times – which come after that – are essentially contradictory objectives.
Optimizing throughput means keeping resources busy. Optimizing response and turnaround time, however, requires resources to be available when they are needed. Achieving the goal of an important address space might result in worsening the turnaround time of a less important address space. Thus, WLM must make decisions that represent trade-offs between conflicting objectives.
WLM is particularly well-suited to a sysplex environment. It keeps track of system utilization and workload goal achievement across all the systems in the Parallel Sysplex and data sharing environments.
A mainframe installation can influence almost all decisions made by WLM by establishing a set of policies that allow an installation to closely link system performance to its business needs. Workloads are assigned goals (for example, a target average response time) and an importance (that is, how important it is to the business that a workload meet its goals).
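As a rough illustration of goal-based management – this is not WLM’s actual algorithm, and the class names and numbers are invented – each service class carries a goal and an importance, and capacity flows toward the most important class that is missing its goal:

```python
from dataclasses import dataclass

@dataclass
class ServiceClass:
    name: str
    importance: int          # 1 = most important to the business
    goal_resp_time: float    # target average response time (seconds)
    actual_resp_time: float
    cpu_share: float         # current share of the processor, 0..1

    def performance_index(self) -> float:
        """PI > 1 means the class is missing its goal."""
        return self.actual_resp_time / self.goal_resp_time

def rebalance(classes: list[ServiceClass], step: float = 0.05) -> None:
    """Move a small slice of CPU from the least important class that is meeting
    its goal to the most important class that is missing its goal."""
    missing = [c for c in classes if c.performance_index() > 1.0]
    meeting = [c for c in classes if c.performance_index() <= 1.0]
    if not missing or not meeting:
        return
    receiver = min(missing, key=lambda c: c.importance)   # most important goal-misser
    donor = max(meeting, key=lambda c: c.importance)      # least important over-achiever
    moved = min(step, donor.cpu_share)
    donor.cpu_share -= moved
    receiver.cpu_share += moved

online = ServiceClass("online", importance=1, goal_resp_time=0.5,
                      actual_resp_time=0.9, cpu_share=0.4)
batch = ServiceClass("batch", importance=3, goal_resp_time=600,
                     actual_resp_time=300, cpu_share=0.6)
rebalance([online, batch])
print(online.cpu_share, batch.cpu_share)   # CPU shifts toward the online work
```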
I/O and data management
The input/output architecture is a major strength of the mainframe. It uses a special processor, the System Assist Processor (SAP), to schedule and prioritize I/O. This processor is dedicated to driving the mainframe’s channel subsystem, which can sustain 100,000 I/O operations per second and beyond. The channel subsystem can provide over 1,000 high-speed buses on a single server.
Data management activities can be done either manually or through automated processes. When data management is automated, the system uses a policy, or set of rules, known as Automatic Class Selection (ACS) to determine object placement and to manage object backup, movement, space, and security. ACS applies to all data set types, including database and UNIX file structures.
Storage management policies reduce the need for users to make many detailed decisions that are not related to their business objectives.
Today’s z/OS provides a disk device geometry called Extended Address Volume (EAV) that, in its initial offering, supports over 223 gigabytes (262,668 cylinders) per disk volume. This helps larger customers constrained by the four-digit device number limitation to begin consolidating their disk farms.
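As a back-of-the-envelope check – assuming standard 3390 track geometry of 56,664 bytes per track and 15 tracks per cylinder – 262,668 cylinders works out to just over 223 GB:

```python
BYTES_PER_TRACK = 56_664      # standard 3390 track capacity
TRACKS_PER_CYLINDER = 15
EAV_CYLINDERS = 262_668       # initial EAV limit

capacity_bytes = EAV_CYLINDERS * TRACKS_PER_CYLINDER * BYTES_PER_TRACK
print(f"{capacity_bytes / 10**9:.1f} GB")   # ~223.3 GB
```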
Intelligent Resource Director (IRD)
Intelligent Resource Director can be viewed as Stage 2 of Parallel Sysplex. IRD gives the ability to move resources to where the workload is. z/OS with WLM benefits from the ability to drive a processor at 100% utilization while still providing acceptable response times for critical applications.
IRD is not a product or component, but consists of three separate but mutually supportive functions.
1. WLM LPAR CPU Management – raises the weight of an LPAR that is missing its service-level goal so that logical CPU capacity moves to that LPAR (see the sketch after this list).
2. Dynamic Channel-path Management (DCM) – designed to dynamically adjust the channel configuration in response to shifting workload patterns.
3. Channel Subsystem I/O Priority Queueing (CSS IOPQ) – z/OS uses this function to dynamically manage the channel subsystem priority of I/O operations for given workloads based on their performance goals.
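A minimal sketch of the idea behind WLM LPAR CPU Management – the weights, the delta and the decision rule are purely illustrative, not IBM’s actual algorithm: weight is shifted toward a partition that is missing its goal.

```python
from dataclasses import dataclass

@dataclass
class LPAR:
    name: str
    weight: int          # share of physical CPU relative to the other LPARs
    missing_goal: bool   # set when the partition's workloads miss their goals

def adjust_weights(lpars: list[LPAR], delta: int = 10) -> None:
    """Shift weight toward an LPAR that is missing its service-level goal."""
    needy = next((p for p in lpars if p.missing_goal), None)
    donor = next((p for p in lpars if not p.missing_goal and p.weight > delta), None)
    if needy and donor:
        donor.weight -= delta
        needy.weight += delta   # more logical CPU capacity flows to the needy LPAR

prod = LPAR("PROD", weight=70, missing_goal=True)
test = LPAR("TEST", weight=30, missing_goal=False)
adjust_weights([prod, test])
print(prod.weight, test.weight)   # 80 20
```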
Predictive Failure Analysis and Health Checker for z/OS
Predictive Failure Analysis (PFA) is designed to predict whether a soft failure (abnormal yet allowable behaviors that can slowly lead to the degradation of the operating system) will occur sometime in the future and to identify the cause while keeping the base operating system components stateless. PFA is intended to detect abnormal behavior early enough to allow you to correct the problem before it affects your business.
PFA uses remote checks from IBM Health Checker for z/OS to collect data about the installation. The objective of IBM Health Checker for z/OS is to identify potential problems before they impact z/OS availability or, in the worst cases, cause outages.
Next, PFA uses machine learning to analyze this historical data and identify abnormal behavior. It issues an exception message when a system trend might cause a problem – thereby improving availability by going beyond failure detection to predict problems before they occur. To help customers correct the problem, it identifies a list of potential issues.
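A toy sketch of this kind of trend analysis – not PFA’s actual modelling, and with invented metric values and threshold: collect a metric over time, fit a simple linear trend, and raise an exception when the projected value crosses a limit.

```python
def projected_value(history: list[float], steps_ahead: int) -> float:
    """Project a metric forward using a simple least-squares linear trend."""
    n = len(history)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# e.g. common-storage usage samples (MB) creeping upward over successive intervals
samples = [410, 420, 435, 450, 470, 490]
if projected_value(samples, steps_ahead=12) > 600:     # illustrative threshold
    print("exception: the current trend may exhaust storage soon")
```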
Recently one of our UNIX experts – focusing on IT optimization – came to me to get an understanding of the mainframe and, like most IT people who have not worked on mainframes, had quite a few obsolete notions. So I created a set of documents – a mainframe overview, one on z/OS (the operating system) and one on CICS (the transaction processor) – and am sharing them here.
The mainframe is a self-contained processing center, powerful enough to process the largest and most diverse workloads in one secure “footprint.” At the same time, the mainframe is just as effective when implemented as the primary server in a corporation’s distributed server farm. There are different classes of mainframe to meet the diverse needs of customers – Business Class and Enterprise Class.
Security, scalability, and reliability are the key criteria that differentiate the mainframe. Businesses today rely on the mainframe to:
Typically, the mainframe shares space with many other hardware devices: external storage devices, hardware network routers, channel controllers, automated tape library “robots,” and so on. Unlike in the past, the mainframe is now physically no larger than many of these devices and generally does not stand out from the crowd of peripheral devices (earlier, mainframes had rooms to themselves; now they are simply part of the data center).
Mainframe interfaces today look much the same as those for personal computers or UNIX systems. A business application is accessed through a Web browser, with the mainframe in the background.
It is now possible to run a mainframe operating system on a PC that emulates a mainframe. Such emulators are useful for developing and testing business applications before moving them to a mainframe production system (which can also yield cost savings).
Most mainframe workloads fall into one of two categories: batch processing or online transaction processing, which includes web-based applications. Today’s mainframe can run standard batch workloads such as COBOL programs, as well as batch UNIX and batch Java programs.
A mainframe can be the central data repository, or hub, in a corporation’s data processing center. For example, centralizing the data in a single mainframe repository can save customers from having to manage updates to more than one copy of their business data, which increases the likelihood that the data is current.
Mainframe – Hardware
Mainframe hardware consists of processors and a multitude of peripheral devices such as disk drives (called direct access storage devices or DASD), magnetic tape drives, and various types of user consoles:
o The term Box may refer to the entire machine or model; it is an expression used due to its shape. Mainframe systems today are much smaller than earlier systems – about the size of a large refrigerator. The mainframe’s power consumption today is 0.91 watts per MIPS and is expected to decrease with future models.
o The abbreviation CEC, pronounced keck, is for the Central Electronic Complex that houses the central processing units (CPUs).
o Central processor complex or CPC refers to the centralized processing hub that contains the processors, memory, control circuits, and interfaces for channels.
o All the processors (S/390 or z/Architecture) present in the CPC are referred to as processing units (PUs). The PUs are characterized as CPs (for normal work), Integrated Facility for Linux (IFL), Integrated Coupling Facility (ICF) for Parallel Sysplex configurations, and so forth.
o A channel provides an independent data and control path between I/O devices and memory. Today, the largest mainframe can have over 1000 channels. A channel can be considered a high-speed data bus. Today’s mainframes use ESCON (Enterprise Systems CONnection) and FICON (FIbre CONnection) channels.
o Channels connect to control units. A control unit contains logic to work with a particular type of I/O device – printers, tape drives etc. Today’s channel paths are dynamically attached to control units as the workload demands – providing a form of virtualizing access to devices.
o Control units connect to devices, such as disk drives, tape drives, communication interfaces, and so forth.
o Sharing of I/O devices is common in all mainframe installations. A technique used to access a single disk drive from multiple systems is called multiple allegiance. Multiple paths to a device allow for effective disk sharing (across multiple servers), which in turn can provide improved performance and availability.
The IBM mainframe can be partitioned into separate logical computing systems. System resources (memory, processors, I/O channels) can be divided or shared among many such independent logical partitions (LPARs).
For many years there was a limit of 15 LPARs in a mainframe; today’s machines can be configured with up to 60 logical partitions. Practical limitations of memory size, I/O availability, and available processing power usually limit the number of LPARs to less than these maximums. Logical partitions are, in practice, equivalent to separate mainframes. Right sizing is the key when it comes to partitions; they can be used to control resource usage and to improve security, availability, and so on.
z/OS – the widely used mainframe operating system – is a share-everything runtime environment that provides for resource sharing through its heritage of virtualization technology. z/OS gets work done by dividing it into pieces and giving portions of the job to various system components and subsystems that function interdependently.
A z/OS system usually contains additional, priced products that are needed to create a practical working system. IBM refers to its own add-on products as IBM licensed programs – and these come at a cost. Independent software vendors (ISVs) also offer a large number of products with varying but similar functionality, such as security managers and database managers. The typical products include:
Even the System Display and Search Facility (SDSF) program that people use extensively to view output from batch jobs is a licensed program (reviewing these licensed programs to see whether they are really necessary – and whether another ISV product with similar functionality is already available in the installation – provides another opportunity for cost savings).
Besides z/OS, four other operating systems dominate mainframe usage: z/VM, z/VSE, Linux for zSeries, and z/TPF. The use of z/OS, z/VM, and Linux on the same mainframe is common.
It is also important to note that if there are multiple versions of the same software in an installation, the license costs are actually multiplied.
The mainframe provides specialty engines – zAAP as a specialized Java execution environment, IFL for Linux – which enable off-loading specific work to separate processors. Because attractive prices are offered for these processors, appropriate use of them reduces the overall total cost of ownership. It also frees the general CPs to continue processing the standard workload, increasing the overall ability to complete more batch jobs or transactions.
Consolidation of Mainframe
Data center consolidation initiatives have resulted in several smaller mainframes being replaced with fewer but larger systems.
Software license costs have become a dominant factor in the growth and direction of the mainframe industry, as mainframe software (from many vendors) can be expensive – often costing more than the mainframe hardware. Though license costs are often linked to the power of the system, the pricing curves favor a small number of large machines: replacing multiple software licenses for smaller machines with one or two licenses for larger machines is cost effective. (While consolidating mainframes, licenses with third-party software vendors may have to be re-negotiated for cost savings.)
The relative processing power needed to run a traditional mainframe application (a batch job written in COBOL, for example) is far less than the power needed for a new application (with a GUI interface, written in C and Java). New powerful mainframes might need only 1% of their power to run an older application, but the application vendor often sets a price based on the total power of the machine, even for older applications.
As an aid to consolidation, the mainframe offers software virtualization through z/VM. z/VM’s extreme virtualization capabilities and Linux on the mainframe make it possible to virtualize thousands of distributed servers on a single server. Consolidating distributed servers onto the mainframe can directly translate into significant monetary savings. (IBM conducted a very large consolidation project, named Project Big Green, to consolidate approximately 3,900 distributed servers onto roughly 30 mainframes using z/VM and Linux on System z; it achieved reductions of over 80% in the use of space and energy.)
Safer Internet Day, now in its eighth year, is organised by Insafe – a European Commission-funded project – to promote safer and more responsible use of online technology and mobile phones, especially among children and young people across the world.
It is celebrated in over 65 countries on the second day of the second week of the second month of the year – and this year it falls on 8th Feb (today!). The topic for 2011 is “our virtual lives” – online gaming and social networking (both most popular with youth) – with the slogan “It’s more than a game, it’s your life“.
Their website (http://www.saferinternet.org/) has quite a few interesting details. Though it is aimed at children, I believe most of what is said is equally applicable for everyone.
Helplines (http://www.saferinternet.org/web/guest/helplines) provide advice on how to deal with problems such as unwanted contacts, cyberbullying or any scary experiences while using online technologies. The website also has guidelines, tools, news and events focusing on online safety and responsible use of the internet and new technologies. For example, the UK’s Safer Internet Centre has resource packs for primary and secondary schools that can be downloaded from www.childnet.com/safety/sid.aspx.
Do spread the message and empower the kids to make safer use of the Internet and the newer technologies.
Gartner’s report on 2011 IT Predictions – http://www.gartner.com/DisplayDocument?ref=clientFriendlyUrl&id=1476415 – highlights that there will be significant changes in the roles played by technology in business, the global economy and the lives of individual users.
The key theme of “IT’s Growing Transparency” (as pointed out in the title of the report itself) is that greater transparency is being demanded, requiring IT to be more tightly coupled to governance and business control. The requirement that IT expenses comply with financial goals will impact internal operations and the structure of contracts with suppliers and providers – cloud service providers in particular.
The daily function of businesses, governments, economies and our individual lives depend on IT and this dependence gives rise to the prospect of technology being used to attack, disrupt and damage the nation states in which we live and work. IT could be wielded as a weapon with potentially catastrophic results. Defense reviews by government agencies across the world continue to highlight the growing risks of “cyber war,” and attacks against some nations have already occurred. The first prediction highlights this worrying trend.
The original theme of cost savings in IT is extending to make “demonstrable support for revenue growth” a primary IT objective. Governments and organizations of all sizes are obliged to re-evaluate the relationship between IT spending, operating budgets and revenue in much finer detail, aimed at understanding how IT investments affect revenue and future prospects (no longer just as a cost-cutting measure).
Consumerization of IT is no longer a phenomenon to be contained or resisted. The attention of users and IT organizations will shift from devices, infrastructure and applications toward information and interaction with peers. This change is expected to herald the start of the post-consumerization era.
The following are the predictions per se:
1. IT’s Global Role – By 2015, a G20 nation’s critical infrastructure will be disrupted and damaged by online sabotage.
2. Revenue Growth – By 2015, new revenue generated each year by IT will determine the annual compensation of most new Global 2000 CIOs.
3. Costs and Investment – By 2015, information-smart businesses will increase recognized IT spending per head by 60%. By 2015, tools and automation will eliminate 25% of labor hours associated with IT services.
4. External Assessments – By 2015, most external assessments of enterprise value and viability will include explicit analysis of IT assets and capabilities.
5. Accountability – By 2015, 80% of enterprises using external cloud services will demand independent certification that providers can restore operations and data.
6. Expanding Markets – By 2015, 20% of non-IT Global 500 companies will be cloud service providers. By 2015, companies will generate 50% of Web sales via their social presence and mobile applications.
7. User Productivity – By 2014, 90% of organizations will support corporate applications on personal devices. By 2013, 80% of businesses will support a workforce using tablets.
8. Society – By 2015, 10% of your online “friends” will be nonhuman.
Of the above, I believe points 2, 5, 6 and 8 are the most interesting and the most relevant to the newer trends. I have tried to cull out the important points to note, especially those related to these specific predictions.
Post recession, capital markets tend to reward companies that report organic growth in revenue (rather than cost cutting). A sustained period of economic recovery is impossible without revenue growth from increased customer demand. CIOs wanting to make “information” just as important to their mission as information “technology” can expect many new demands in the new decade.
The following four IT-enabled initiatives are the ones with potential to deliver increased enterprise revenue:
Analytics is moving to the next level of maturity, and the skills worth investing in are those that help uncover trends and opportunities – and that do not miss meaningful shifts in customer sentiment and preferences. Understanding and explaining human behaviour – how people react to one another in different cultural settings – would help both business and governments. Gartner recommends that CIOs devote 50% of R&D and training budgets toward funding social science education – sociology, anthropology, cognitive psychology and ethnomethodology – for staff.
Cloud services are seen as highly risk-prone and requiring a high level of security functionality, while the fault-tolerance claims of providers are often overblown. Data loss is a higher risk in the cloud than data security, which is what gets the attention today. An individual user cannot be expected to determine whether a service meets security, regulatory and business continuity requirements, yet unless buyers feel confident, cloud computing cannot reach its full potential.
A third-party “certification” model with highly skilled risk assessors is the practical solution available. Existing certifications like SAS 70 are not adequate: they do not provide evidence of security and data recoverability, and they are mostly misused by suppliers today.
New cloud-specific certification programs involving the US and European Union governments and cloud industry consortia are a step in the right direction. Gartner also points out that the need for, and benefits of, standards like FedRAMP, the Trusted Cloud Initiative, and BCM standards such as BS-25999, ASIS SPC.1-2009 and NFPS will not become apparent until there have been several prominent instances of unrecoverable data loss (one hopes that this does not turn out to be the case).
In any case, it is advisable for organizations looking at the cloud not to do away with traditional disaster recovery mechanisms, such as offline backups, at least until the cloud is more proven.
Prediction 6 is the most interesting and in a way highlighting a whole new world of opportunities. Cloud computing is removing the historical barriers for non-IT companies to provide IT related competencies – and these non-IT service providers may directly compete with IT organizations. Businesses will start understanding the principle that cloud computing is a means to deliver “IT-enabled capabilities” and not just “IT capabilities”. Hyperdigitization of industries like financial services, education, communications and media, government etc. would add fuel to this trend.
According to a consumer survey, shopping is the third-largest activity for consumers on the web (social activities form the eighth largest). By 2013, the installed base of web-capable mobile phones and smartphones will surpass that of PCs and laptops. Organizations are re-investing in their e-commerce capabilities (both B2B and B2C) to increase sales via SMS, mobile web browsers and applications.
The mobile device has the most potential of any channel to provide “in context” offers to customers because of its access to identity (e.g., calendar), environmental (e.g., GPS location), process (e.g., wish list) and community (e.g., Facebook friends) information about the mobile device user. Organizations should move to a context-aware promotions model that leverages information about the mobile and social user.
The web continues to evolve in the social dimension: every website is becoming a social site, and every social site is evolving toward a social platform. A social media strategy involves several steps: establishing a presence, listening to the conversation, speaking (articulating a message), and interacting in a two-way, fully engaged manner. Most efforts at social engagement are handled manually, which is hard to scale. Some e-commerce sites have fully or semi-automated live chats, providing canned answers to questions and redirecting to a human operator as necessary.
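A toy sketch of that semi-automated pattern – the keywords and replies below are made up – matching a question against canned answers and handing off to a human when nothing matches:

```python
CANNED_ANSWERS = {
    "shipping": "Standard shipping takes 3-5 business days.",
    "return": "You can return any item within 30 days for a full refund.",
    "hours": "Our support desk is staffed 9am-6pm, Monday to Friday.",
}

def respond(message: str) -> str:
    """Answer from the canned set, or redirect to a human operator."""
    text = message.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in text:
            return answer
    return "Let me connect you to a human agent."   # escalation path

print(respond("How long does shipping take?"))
print(respond("My order arrived damaged"))          # no match -> hand-off
```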
In 2010, large organizations embarked on systematizing the act of listening – monitoring social media conversations in blogs, social sites, forums, and more. Nigel Lack, a software developer, created a global warming chatbot (a program that chats), and some users have conversations with it spanning dozens of tweets over a period of days.
Progress in artificial “intelligence” – in the classic aspects of linguistic processing, semantic knowledge and logical inference, and also in the area of “emotional intelligence” – would make such conversations appear even more natural. By 2015, efforts to systematize and automate social engagement will result in the rise of social bots – automated software agents that can handle, to varying degrees, interaction with communities of users in a manner personalized to each individual.
Today, the average Facebook user has 120 to 150 friends, some of whom they have never met, and this situation is treated as natural. A next step in the evolution of online interaction is to have software bots as friends. In some cases, users will be aware they are dealing with a bot, and will find this acceptable.
The conclusion seems to be “Virtual is Real!”.
The term appliance brings to mind the electrical/mechanical appliances, like the washing machine and toaster, that we use at home. The key characteristics of these appliances are that they are simple to use, reliable, and typically not serviceable by the owner. The same concept of household appliances, applied at the enterprise level, led to IT appliances. Compared to general-purpose machines, appliances are highly specialized and optimized devices designed to handle specific tasks efficiently and effectively.
The term “Information Appliance” was coined as early as 1979 by Jef Raskin, who left Apple to form his own company, Information Appliance Inc.
Though people do not tend to think explicitly of appliances in an enterprise context, appliances have been in the enterprise for quite a long time. Most enterprises today use special-purpose routers from Cisco and Nortel; general-purpose computers are rarely used for packet routing. These routers are nothing but appliances – physical devices designed to do a single function efficiently and effectively. Load balancers, proxy caches and SSL accelerators are other appliances that have long been part of the IT infrastructure.
In an Enterprise context, a computer appliance is basically a self-contained IT system that can be plugged into an existing IT infrastructure to carry out a single purpose. The appliance’s purpose could be to provide additional processing power, storage, monitoring or security.
Network, storage and security are the areas where appliances are most widely used at the enterprise level. Storage appliances perform storage-related functions including backups, storage management, license management, encryption, access control and availability management. A range of network appliances are in vogue, including network fax servers, routers, backup servers and network monitors. A variety of security appliances offering firewalls, anti-virus scanning, content filtering, anti-spam, intrusion detection, penetration testing, vulnerability assessment, remote authentication and VPN gateways are heavily used by enterprises.
These appliances are designed to be decoupled and centralized, and hence can be shared among many systems. They can be expanded, managed and optimized without requiring any other system in the data center to change. In effect, the purpose-built nature of appliances translates into significant benefits such as stability, ease of use, reliability, security, simplicity, and ease of deployment and administration.
With the success of these, another category of server appliance came up: “compute” appliances that offload specific computing operations to a dedicated device. Java applications were re-hosted transparently onto Java appliances that use techniques like pauseless garbage collection and optimistic thread concurrency to achieve optimal performance. Similarly, SOA appliances that simplify, secure and accelerate XML and web services deployments are widely used.
With mobile technology becoming all-pervasive, the industry experienced the rise of handheld appliances used by mobile workers in various industries. These handheld information appliances are specially designed and typically custom-built for the task they have to perform – leading to significant improvements in customer relationships and employee productivity.
Data warehousing appliances and integration appliances that offer an entire suite of functions – from niche vendors like Greenplum, Vertica and Cast Iron as well as from established vendors like Oracle, IBM and Microsoft – started flooding the market.
Data warehousing appliances comprise an integrated set of servers, storage, OS, DBMS and software, pre-installed and pre-optimized for data warehousing, and offer the scalability, flexibility, workload management and other features required to support enterprise data warehouse (EDW) functions. Data and process integration appliances offer data and process integration between transactional systems, or between transactional and reporting systems, and are sophisticated enough to play the role of an Enterprise Service Bus (ESB) thanks to their strong routing, mediation, transformation and protocol-switching capabilities.
As appliances became more sophisticated, they started including a customized operating system, database and so on, running on specialized hardware. This trend runs against the traditional wisdom of letting the application focus on its function and be able to run on any OS, database or hardware, but that is the price paid for the optimization achieved. In reality, these appliances do not mean the development of a new OS or database. As Forrester points out, these machines are assembled from fairly standard parts, tweaked to be optimal for the chosen set of work, with a lot of built-in redundancy so as to be self-contained. In addition, better security is achieved: the vendor pre-hardens the solution against known security vulnerabilities, and complete visibility into the appliance enables the vendor to identify and fix security risks as they arise.
Vendors tried to flood the enterprise with appliances in 2000, and again in 2007, offering everything from single-function operations to completely integrated applications. In spite of all the advantages of appliances, enterprise IT did not take to them in a big way – except for infrastructure, or where specialized security or mobility requirements made appliances necessary.
The lack of full-fledged enterprise support can be traced to multiple reasons: using hardware from various vendors for single purposes meant specific arrangements for each; non-serviceability was seen as an issue when consolidation and standardization were the themes; there were concerns about vendor lock-in potentially leading to overpriced appliances; hardware replacement costs in case of problems appeared to wipe out the cost advantage of using an appliance; and the lack of integrated management features across appliances from multiple vendors threatened an out-of-control data center.
As hardware virtualization has matured, a new breed of appliance has emerged: “virtual appliances”, which deliver the benefits of an appliance without the hardware. These virtual appliances address many of the blockers to physical appliances and, coupled with cloud computing, provide an enticing option for enterprises. The appliance market has until now been in push mode, with vendors pushing their products; we can now expect enterprises to actively consider adopting appliances as part of their IT landscape.