Enterprise Linux Log


August 2, 2007  1:30 PM

OpenSolaris users group meeting


Just a few more comments about the OpenSolaris users-group meeting I attended recently.

First, I will say that I was impressed with the turnout and the overall presentation put together by Sun. Clearly, Sun is trying to help the OpenSolaris community in a big way. Truthfully, I’m a little jealous of this, as I’m the group leader of the NY Metro POWER-AIX/Linux users group and I’ve yet to see a similar commitment from IBM to draw interest to our group.

Back to the meeting. There certainly was a lot of information presented, and Sun clearly hoped to articulate a vision of what its new world would look like. Unfortunately, most of that innovative vision was borrowed from Red Hat. The underlying message most attendees heard was, ‘if you can’t beat them, join them’. What Sun wants to do is essentially copy Red Hat’s two-tier model of Enterprise Linux and Fedora (one for pay, the other community supported) and apply it to Solaris. Further, instead of trying to compete with Microsoft, Sun continues to be preoccupied with Linux and with showing why Solaris is better. This strategy did not work in the Unix market, where Sun clearly has not dethroned AIX or even HP-UX, and we’re not certain why the company thinks its new OpenSolaris model will dethrone Linux, or why it even feels it needs to.

If I were Sun, I would just let the technology speak for itself, rather than try to reinvent other vendors’ models.

August 2, 2007  1:28 PM

Asigra announces 64-bit agentless backup for virtual servers


LinuxWorld vendor news:

Backup and recovery software vendor Asigra Inc. is targeting the increasingly virtualized data center with 64-bit agentless backup for virtual servers.

Announced on July 31st, 64-bit Televaulting for virtual servers offers N+1 grid-based architecture and allows system administrators to restore individual VMs without having to restart the entire physical server, which could host several other otherwise unaffected virtual machines.

The pricing model offered by Asigra is virtualization-friendly: it is based on the amount of data being protected, not on the number of virtual and/or physical servers. 64-bit Televaulting also boasts any-to-any restore capability (P2P, P2V, V2V or V2P), as well as hot backups.

Asigra’s virtual server backup solution also offers bare-metal recovery and backup of enterprise apps such as Exchange, SQL Server and Oracle, giving it good potential to be a part of an IT department’s disaster recovery plan. Also, when it comes to VMware, Asigra can be configured to back up only the VMDK file changes instead of the entire virtual machine every time a backup is performed.


August 2, 2007  12:35 PM

Splunk to announce general availability of Splunk 3.0


LinuxWorld vendor news:

Splunk will announce the general availability of Splunk 3.0 this coming Monday at LinuxWorld. The 3.0 version of the IT search engine includes improved features such as search-based reporting, interactive summaries and filters, navigation capabilities, dashboards and more.

The beta version was available for trial from the Splunk Web site, and over 2500 users tried it out.  An early press release reported that many customer ideas from the beta have been incorporated into the official 3.0 release.

The new interactive reporting feature allows users to analyze logs and IT data in real time while moving back and forth between unstructured search and structured reporting. Specific reports, charts, searches or alerts can be personalized and viewed in a dashboard, and the search language is more specific. Splunk 3.0 also includes a deployment server, 64-bit multiprocessor support, scripted inputs and SplunkBase 3.0, the Splunk IT wiki.

More information and the Splunk 3.0 release (available for free) can be found at Splunk.com.


August 2, 2007  12:20 PM

Product Brief: ServePath will release Web-based multi-server system GoGrid at LinuxWorld


LinuxWorld vendor news:

On August 7th, managed server provider ServePath will reveal GoGrid, a Web-based system for buying and launching multi-server Internet architectures.

GoGrid will enable businesses to launch servers and add or remove RAM without using customer service or support.

More information to come.


August 1, 2007  9:20 AM

Red Hat Enterprise Linux 5.1 beta announced


Red Hat announced today the availability of the beta release of 5.1 (kernel-2.6.18-36.el5) for the Red Hat Enterprise Linux 5 family of products including:

- Red Hat Enterprise Linux 5 Advanced Platform for x86, AMD64/Intel(r) 64, Itanium Processor Family, and IBM POWER
- Red Hat Enterprise Linux 5 Server for x86, AMD64/Intel(r) 64, Itanium Processor Family, IBM POWER, S/390 and System z
- Red Hat Enterprise Linux 5 Desktop for x86 and AMD64/Intel(r) 64

Red Hat notes that the supplied beta packages and CD images are intended for testing purposes only and that this early access software is not supported and is not intended for production environments. The beta testing period is scheduled to continue through September 4, 2007.

Some new features:

Virtualization improvements

+ Completion of virtualization support on Itanium2 platforms: PV (para-virtualized) and HV (fully-virtualized) Itanium2 guests are now fully supported on Itanium2 hardware (HV requires hardware support)
+ Support for 32-bit PV guests on 64-bit AMD64/Intel(r) 64 hosts
+ Improved support for HV guests
– Hot-migration support for HV guests
– General performance improvements
– Host infrastructure for PV drivers (the actual drivers will be delivered separately in the context of the individual supported guest operating systems)
+ Update of the libvirt management layer

Storage Improvements

+ Improved support for autofs load balancing with replicated servers
+ Added replication and migration support for NFSv4 referrals
+ Support for installation to and boot from dm-multipath
+ Update of the SATA subsystem
+ Significant stability improvements to the GFS2 file system
+ Ext3 filesystem now fully supports filesystem sizes of up to 16TB
+ Updated CIFS to version 1.48aRH
+ Technology Preview of the iSCSI target device (iSCSI server)

Serviceability improvements

+ Improvements to the “crash”-analysis tool
+ IPMI and HPI updates
+ Added Kexec/Kdump support for the host in a virtualized environment

Networking

+ Update of the Infiniband support and addition of RDMA over Ethernet
+ Technology Preview of the new Devicescape wireless network stack
+ Expanded in-kernel socket API
+ IPv6 improvements
+ Technology Preview of new mac80211 wireless stack (ipw 4965 wireless adapter) support. Note that the iwl4965-firmware package is not in the beta ISOs but is available on RHN

Windows Interoperability

+ Samba update for improved interoperability
+ PAM/Kerberos and NSS-LDAP updates for improved integration in Active Directory environments

Security

+ Improved auditing
+ Smartcard support for SSH
+ Integration of LSPP certification related changes

I added some emphasis to the Windows interoperability section because, when SearchEnterpriseLinux.com Site Editor Jan Stafford and I attended Red Hat’s annual summit in May, it seemed like there wasn’t much to the company’s interoperability efforts beyond Samba. A continuation of that effort appears to be under way here.


July 31, 2007  9:00 PM

Linux backup: Seven backup dos and don’ts


Ready for some quick tips? Here are seven best practices for handling backup, compliments of Anthony Johnson, president and CEO of Storix Inc. (a rough sketch of the verify-and-encrypt steps follows the list).

  • Do get a full-system backup, not just your data.
  • Do verify your backups.
  • Do encrypt your backup data.
  • Do always test your full-system recovery process.
  • Do understand your recovery down-time restrictions.
  • Don’t assume your restore process will recover everything as it was.
  • Don’t assume your backup will always be restored to the same hardware.
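
As an illustration of the "verify" and "encrypt" items above, here is a minimal Python sketch of one possible approach: build an archive, confirm it can be read back, record a checksum, then hand it to gpg for encryption. The source path, archive name and the choice of symmetric GnuPG encryption are assumptions made for the example, not part of Storix's product or Johnson's recommendations.

```python
import hashlib
import subprocess
import tarfile

SOURCE = "/etc"                        # assumed directory to protect; a real full-system
ARCHIVE = "/backup/etc-backup.tar.gz"  # backup would cover far more than /etc

# 1. Create the backup archive.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SOURCE)

# 2. Verify: re-open the archive and walk every member to confirm it is readable.
with tarfile.open(ARCHIVE, "r:gz") as tar:
    members = tar.getmembers()
print(f"verified {len(members)} entries in {ARCHIVE}")

# 3. Record a checksum so a later restore can detect a corrupted archive.
digest = hashlib.sha256()
with open(ARCHIVE, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
with open(ARCHIVE + ".sha256", "w") as f:
    f.write(f"{digest.hexdigest()}  {ARCHIVE}\n")

# 4. Encrypt the archive with GnuPG (symmetric AES-256); gpg will prompt for a passphrase.
subprocess.run(
    ["gpg", "--symmetric", "--cipher-algo", "AES256",
     "--output", ARCHIVE + ".gpg", ARCHIVE],
    check=True,
)
```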

Johnson shared these tips during a LinuxWorld 2007 preview interview. He also discussed the differences between handling backup on Windows and Linux, saying:

“Windows systems are typically configured for simple storage configuration — a single filesystem on each disk. After a failure, its simplicity means that customers usually must re-partition the disks and re-install the operating system from the original distribution, then restore other files from a backup.”

Linux backup systems require more IT administrator know-how, but deliver greater flexibility and functionality. With Linux, said Johnson, savvy IT managers can take advantage of LVM (Logical Volume Management), software RAID and other options for higher performance, availability, recoverability and security. There is a catch, said Johnson:

“However, flexibility brings complexity, and re-installing the operating system in the same manner as with Windows systems is much too complex and time-consuming for most corporate environments. Linux systems typically come with no full-system recovery tools, so users must find the best backup product to suit their stringent backup policies and recovery requirements.”
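
To make the LVM flexibility Johnson describes a bit more concrete, here is a minimal sketch of one common technique: taking a copy-on-write LVM snapshot so the backup runs against a frozen, consistent view of a live filesystem, then removing the snapshot afterward. The volume group, logical volume and mount point names are hypothetical, and this is a generic LVM pattern rather than a description of Storix's tooling.

```python
import subprocess

VG = "vg00"            # hypothetical volume group
LV = "lv_root"         # hypothetical logical volume holding the data to back up
SNAP = "lv_root_snap"  # name for the temporary snapshot
MOUNT = "/mnt/snap"    # assumed to already exist as an empty mount point

def run(cmd):
    """Run a command and raise if it fails, so a broken step is never silently skipped."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the snapshot; 1G of writes can accumulate on the origin while the backup runs.
run(["lvcreate", "--snapshot", "--name", SNAP, "--size", "1G", f"/dev/{VG}/{LV}"])
try:
    # 2. Mount the snapshot read-only and point the backup tool of choice at it.
    run(["mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MOUNT])
    # ... run tar, rsync or a backup agent against MOUNT here ...
finally:
    # 3. Always clean up, even if the backup step fails.
    subprocess.run(["umount", MOUNT])
    run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"])
```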

Looking for more Linux backup advice? Check out the Linux backup software how-to on Alessandro Tanasi’s blog.


July 31, 2007  2:10 PM

‘Big Four’ systems management vendors ripe for open source shake-up


A new research report from the New York-based 451 Group has found that the ‘Big Four’ systems management vendors — BMC, HP, IBM and CA — are “ripe for a shake-up from open source systems management players.”

In the past 18 months, open source options in the systems management space have grown to include a new set of vendors, including AlterPoint, GroundWork Open Source, Hyperic, Open Country, The OpenNMS Group, Qlusters and Zenoss. These vendors have made a business out of backing open source systems management projects with commercial-grade support offerings and subscriptions, à la Red Hat Enterprise Linux.

This combination of open source with enterprise services will have an impact on the proprietary systems management vendors, said co-author of the report Raven Zachary.

“Open source is breathing new competitive life into systems management, ultimately forcing the established vendors to respond in their products, pricing and strategies,” said Zachary, the 451 Group open source research director.

However, both Zachary and the report’s other author, analyst Jay Lyman, agree that open source systems management vendors still face an uphill battle against entrenched players with their existing integrated suites and established customers.

The 451 Group found the open source systems management category is dominated by systems monitoring, configuration, provisioning and patching components. That said, open source systems management offerings lack the full feature set provided by the leading proprietary systems management vendors. Still, these core features represent the primary demands from IT end users.

Thus far, Zachary and Lyman have suggested that the cost savings and flexibility can make open source worth trying out. This is particularly the case where monitoring, configuration and other point tools are easier to swap out than comprehensive systems management suites.

“What is clear from interactions with end users is that open source is now deeply entrenched in the software infrastructure of many organizations. End-user organizations that have seen benefits from open source software at the lower level of the infrastructure stack are now contemplating opportunities for open source software in systems management,” Lyman said.

This 60-page report, ‘Managing in the Open: The Next Wave of Systems Management,’ also looks at the emergence of the aforementioned open source systems management vendors and the disruptive impact they are having on proprietary systems management vendors. It reviews the existing open source systems management players, and articulates the similarities and differences among these offerings. It also explores the noncommercial angle and leading open source systems management projects that are being rapidly adopted in the enterprise. The report includes a set of recommendations for end users with regard to open source systems management, as well as recent survey data on the topic.

For more information, visit the 451 Group at www.the451group.com.


July 31, 2007  12:47 PM

10 IT system monitoring best practices


Here are 10 best practices for system monitoring that Javier Soltero, CTO of Hyperic Inc., has seen succeed in his years in IT.

  1. Define what it means for a given resource — a server, an application or a service — to be labeled “production”.
  2. Figure out what monitoring you need to satisfy the production requirement.
  3. Implement the monitoring capability, either manually or with open source tools like Nagios or commercial tools.
  4. Define what it means for something to be “broken/unavailable/on fire” — also referred to as WARN/ERROR/CRITICAL.
  5. Implement alerts in your monitoring system to capture these thresholds (a minimal check-script sketch follows this list).
  6. Define what process is to be followed for each alarm level.
  7. Make sure your alerting process follows that notification process.
  8. Create roles/responsibilities for groups of people to share alerts, control and detailed access relevant to their job function. Focusing individuals generally means better performance for their area.
  9. Designate a small number of super-users who architect your entire system of alerts, monitoring protocols, roles, etc., to ensure they follow a single blueprint.
  10. Lather, rinse, and repeat if necessary.
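
As an illustration of steps 3 through 5, here is a minimal sketch of a Nagios-style check script. It follows the standard plugin convention of exit code 0 for OK, 1 for WARNING and 2 for CRITICAL; the disk-usage check, the path and the thresholds are hypothetical examples, not anything Soltero or Hyperic prescribe.

```python
#!/usr/bin/env python
"""Minimal Nagios-style plugin: alert on disk usage of a given path."""
import os
import sys

OK, WARNING, CRITICAL = 0, 1, 2   # standard Nagios plugin exit codes

WARN_PCT = 80    # hypothetical WARN threshold (step 4)
CRIT_PCT = 95    # hypothetical CRITICAL threshold (step 4)
PATH = "/var"    # the resource declared "production" (steps 1 and 2)

stat = os.statvfs(PATH)
used_pct = 100.0 * (1 - stat.f_bavail / float(stat.f_blocks))

# Step 5: map the measurement onto the alert levels the team defined.
if used_pct >= CRIT_PCT:
    print(f"CRITICAL - {PATH} is {used_pct:.1f}% full")
    sys.exit(CRITICAL)
elif used_pct >= WARN_PCT:
    print(f"WARNING - {PATH} is {used_pct:.1f}% full")
    sys.exit(WARNING)
else:
    print(f"OK - {PATH} is {used_pct:.1f}% full")
    sys.exit(OK)
```

The monitoring system, whether Nagios, a commercial tool or a home-grown scheduler, runs a script like this on an interval and routes WARNING and CRITICAL results into whatever notification process was defined in step 6.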

I pulled these tips from a LinuxWorld 2007 preview interview with Javier Soltero. In another excerpt from that interview — Virtualization boosts Linux adoption big-time — he talks about the synergy between Linux and virtualization and challenges posed in managing multiple-operating system environments and identifying and tracking virtual machines. Javier also offered some great comments on other subjects, which can be found in articles from our LinuxWorld and Next Generation Data Center Conference 2007 coverage here.


July 31, 2007  12:40 PM

Red Hat talks the expansion of operating system-based virtualization


SearchEnterpriseLinux.com is prepared to go into LinuxWorld next week with guns blazing, but not so fast that we can’t post some preview coverage beforehand. Over at our conference coverage page, LinuxWorld 2007: News, trends and tutorial coverage, you can see what’s live and get a feel for some of the trends that have materialized thus far.

One of the major trends touched upon before the conference even begins is virtualization. It’s not just for consolidation anymore, as many of you undoubtedly know already. Red Hat was one of the first Linux vendors to get something on this topic into my mailbox this week (Novell was the other), and it was interesting to see what they have up their sleeves right now.

According to Red Hat’s Emerging Technologies Team, the company is finally starting to see how its customers are actually going to use virtualization (a brief warning: after reading these, you will have the uncanny feeling that Red Hat can do no wrong with virtualization):

* Customers are quickly realizing that Red Hat Enterprise Linux virtualization, which is provided as part of the base product for no additional cost, works really well. It’s stable, mature and easy to manage.
* Para-virtualization, available for Red Hat Enterprise Linux 4 & 5 guests, delivers performance that is close to bare-metal. So why not use it everywhere?
* Full-virtualization performance is dependent on the application – so it needs to be deployed with some care. But, enhancements due at the end of the year will close the gap with para-virtualization significantly. This will make Red Hat Enterprise Linux a terrific virtualization platform for any Windows system, with better storage virtualization and driver support than is available with proprietary virtualization products, at much lower cost.
* Consolidation frees up systems that can be redeployed as fresh Red Hat Enterprise Linux servers to handle rapidly growing IT requirements.
* The exciting uses of virtualization lie beyond consolidation – they are in areas of high availability, operational flexibility, resource management and enhanced development environment.
* Red Hat Enterprise Linux Advanced Platform, with its comprehensive storage virtualization capabilities, can save you from having to purchase lots of other expensive software.
* Live migration, which is also included in the base product, is the key to flexibility (see the sketch after this list).
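
Since the hypervisor and the libvirt management layer both ship in the base product, a live migration can be scripted in a few lines. Below is a minimal sketch using the libvirt Python bindings; the host URIs and the guest name "web01" are hypothetical, and Red Hat's own management tools may wrap this step differently.

```python
import libvirt

GUEST = "web01"                      # hypothetical guest name
SRC_URI = "xen+ssh://src-host/"      # hypothetical source host (RHEL 5 virtualization is Xen-based)
DST_URI = "xen+ssh://dst-host/"      # hypothetical destination host

# Connect to both hypervisors.
src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

# Look up the running guest on the source host.
dom = src.lookupByName(GUEST)

# Migrate it live: the guest keeps running while its memory is copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print(f"{GUEST} is now running on {DST_URI}")

src.close()
dst.close()
```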

We’ll know more after a briefing or ten at the show next week, where we’ll get a good, objective look at these various points. For now, however, I suggest you check out our preview coverage and wait for the live stuff as it trickles in from the show.


July 30, 2007  4:40 PM

Virtual directories: Identity management and data integration panacea or placebo?


Virtualizing directories is an increasingly deployed technique for handling some identity management issues, secure data sharing and centralization of data resources. Among other things, a virtual directory enables integration of user identity information from disparate applications in an enterprise.

In this post, I’ll share some info about why virtual identity technologies are being used today, gleaned from some Web resources (see links at the end of this post) and my recent conversation with Dieter Schuller, Radiant Logic’s Vice President of Sales/Business Development, and Dan Beckett, Technical Strategist.

Schuller, Beckett and I talked about the uses of virtual directories, and not specific products, although – of course – Radiant Logic has one in this area. I got a glimpse of Radiant Logic’s RadiantOne VDS product at the recent Burton Group Catalyst Conference in San Francisco. Radiant’s next SF stop will be the LinuxWorld/Next Generation Data Center Conferences Aug. 6-8.

For background, Beckett and Schuller shared this sound bite about virtual directories from Burton Group analyst Dan Blum:

“As e-business usage expands, and as the enterprise evolves internally through mergers, acquisitions, and other change drivers, directory architecture inevitably drifts in and out of sync with the users and applications. The ability to ‘virtualize’ directory services — to not care which directory product (or database product) is employed or how many are employed — has become an important capability for IdM infrastructure, which must mediate between the changing applications and the stable directory services.”

Beckett further explained that virtual directories can be used to leverage identity management initiatives by virtualizing information from several sources within an enterprise. Essentially, virtual directory technology consolidates data while removing inconsistencies and duplications within lists and enabling customization of authentication and modification functions. The end result should be a reduction in the memory used to store and share that data and, therefore, an increase in the memory available for other purposes.

“Businesses have built up silos of data. Each silo is valuable and critical to the business, but the silos usually have restrictive rules about how that data can be used by other initiatives,” Beckett said. “Examples would be the security data inside a mainframe system or in Active Directory. To leverage that data for a portal or collaboration or another initiative would be difficult.”

Usually, said Schuller, the silos were built up because internal departments or individuals had a job to do, or a money-making initiative to deploy, and didn’t want to wait for corporate IT to set up their database or other application. Also, mergers and acquisitions create silos. He explained:

“Even if you build using legacy tools like Active Directory, you can end up with silos in a homogenous environment that can’t work with each other.”

So, how do you consolidate all that data without people having to give up the ownership of that data? Virtualization can make that data available to all, but the data owners are allowed to enact rules about how that data is shared and used.

“What’s needed is a single directory with a single schema. Virtualization makes that data available via a single protocol, such as LDAP,” said Beckett. “Virtualization makes all the disparate silos look the same, and it’s easy to share and manipulate them to meet the needs of applications coming in.”
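
From the application's point of view, that single protocol is the whole story: it binds to one LDAP endpoint and never learns whether a given attribute originally lived in Active Directory, a mainframe security store or an HR database. Here is a minimal sketch using the python-ldap library; the host name, bind DN, base DN and attributes are hypothetical, and this is generic LDAP client code rather than anything specific to RadiantOne VDS.

```python
import ldap

# The application only knows about the virtual directory endpoint (hypothetical host and DNs).
VDS_URI = "ldap://vds.example.com"
BASE_DN = "ou=people,dc=example,dc=com"

conn = ldap.initialize(VDS_URI)
conn.simple_bind_s("cn=portal-app,ou=services,dc=example,dc=com", "secret")

# One search; behind the scenes the virtual directory may join AD, a mainframe
# security store and an HR database to produce this unified entry.
results = conn.search_s(
    BASE_DN,
    ldap.SCOPE_SUBTREE,
    "(uid=jdoe)",
    ["cn", "mail", "telephoneNumber"],
)

for dn, attrs in results:
    print(dn, attrs)

conn.unbind_s()
```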

Usually when identity management problems came up, Schuller said, people took two approaches: use a metadirectory, which enables data flow between directory services and databases to maintain synchronization; or create an operational data store (ODS), a type of database in which contents are updated through the course of business operations.

Unfortunately, said Schuller, business requirements come down the pike faster than most IT shops’ infrastructure team can handle. ODS and metadirectories both create “a monolithic view that can’t flex with business requirements; but virtualization allows you to create multiple views that can flex for future apps and permutations.”

(Not everyone agrees with this assessment, as you’ll see in this post: Virtual vs. meta.)

Active Directory (AD) users, in particular, could benefit from virtual directories. “Active Directory isn’t going away anytime soon, so you need to leverage the data inside AD for all apps,” said Beckett. Schuller noted that AD isn’t designed to hold huge customer profiles, and people end up creating a huge database to do that. It’s easier, he said, to virtualize info from all the silos. Also, people are reluctant to extend the AD schema, so being able to virtually extend the schema is much less intimidating.

Schuller and Beckett told me about their work on a virtual directories project for a large cable services company, which had many separate authentication silos and many databases. Customer data was parsed out in separate databases by, say, customer name, address, location of devices (like set-top boxes) and services provided.

“They needed a unified picture of that customer, and that unified customer profile was only achievable via virtualization,” said Schuller. “There was no way — physically and politically — that they could create the mother of all databases and have that all in one place. Via virtualization, you can gather the data in one place and correlate each bit of data one to another.”

SearchEnterpriseLinux.com News Writer Jack Loftus will be covering this topic in more detail during the LinuxWorld and Next Generation Data Center Conferences and afterward. So, drop Jack a line at jloftus@techtarget.com if you’re using virtual directories, know a lot about them or think they’re not what they’re cracked up to be.

Here are some links to more information on virtual directories:

