Log files may be the most important piece of forensic information we have when determining why a server or application crashed. More often than not, though, the warning signs of such a disaster were available to IT administrators all along. They just have to know where to look (hint: what do you think log files are for?)
Looking for a repeating pattern in a list one thousand items long might seem daunting, but luckily there is help. There’s no need to fear, Splunk is here.
Splunk is an amazing little web application (currently at version 3.1.3) that indexes just about any type of log file you can think of. Not only does Splunk index the information, it presents it as a beautiful, easy-to-use web application (purists need not worry; you can access the information from a terminal as well). So what is the big deal about searching log files? You can do that with grep, you say. True, but Splunk is hundreds of times more powerful and excels in four areas:
Splunk can index logs from a number of sources:
- Files and directories
- FIFO queues (pipes)
- Network ports (syslogging directly to Splunk)
Splunk enables you to tail log files, the contents of entire directories, pipes, and even open ports for applications to send their logs directly to Splunk itself (although I recommend using a separate syslog server in order to maintain a file-based log rotation history.)
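If you do point applications straight at Splunk's network input, the sending side is just standard syslog configuration. Here is a minimal sketch assuming an rsyslog-style daemon on the sending host; the hostname and ports are made up, so substitute whatever network input you actually defined in Splunk (or in your intermediate syslog server):

```
# /etc/rsyslog.conf on the sending host
# Forward all facilities and priorities to the box running Splunk.
*.*    @splunk.example.com:514      # single @ = UDP
#*.*   @@splunk.example.com:9514    # double @@ = TCP
```

If you take my advice and keep a separate syslog server in the middle, this same forwarding rule simply points at that server instead, and the syslog server in turn feeds Splunk.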
Also, Splunk is more visually appealing than grep (no offense, grep). To give you an idea of what data looks like in Splunk, take a gander at this screenshot:
Looking at log files has never been so much fun!
This is where Splunk really outshines its command line competition. Imagine you wanted to comb your log files to figure out which VM has had the most VMotion events in your VMware Infrastructure. With Splunk that is as easy as pie — a pie chart, that is:
Splunk allows you to easily query the data with its SQL-like search language in order to build complex analysis reports. And if that was not enough…
Splunk not only allows administrators to easily determine the goings-on of their servers through log file analysis, it also allows them to share their logs with the rest of the Splunk community. Imagine this scenario: a major website’s web servers are crashing and the website’s administrators cannot figure out why. As an Internet business, their primary point-of-sale is the web, so if their web servers go offline that is very bad. The administrators are pulling out their hair trying to figure out the problem when one of them realizes they haven’t checked Splunk. Because the website’s administrators are participating in SplunkBase, they can analyze not only their own log files but also the logs of anyone else who uploads logs to Splunk’s community. Bingo! They discover that the problem was a lock that was not getting destroyed.
By themselves, the administrators did not have a large enough data set to determine the problem, but because others had generated similar logs and figured out the problem already, the website admins were able to quickly resolve the issue.
I’ll say it again, Splunk is great. Apart from VMware Server, Splunk may be my favorite server application to come along in the past few years. I cannot imagine running an enterprise data center without Splunk. See you on SplunkBase!
The first alpha release from the increasingly popular Ubuntu Linux is now available. The build is clearly marked as not ready for prime time, but offers a sneak peek at the next release, 8.04, expected in April 2008. You can also contribute publicly to the bug-tracking mechanism should you choose. I installed the alpha release, Hardy Heron Alpha-1, which was generally indistinguishable from other Ubuntu releases, namely the Gutsy 7.10 release.
Some takeaway notes about this alpha release: it includes X.Org 7.3 for the X Window System and pulls in some Debian changes as well. It uses kernel 2.6.22, the same as Gutsy 7.10. Comparatively, Red Hat Enterprise Linux 5.1 is on the 2.6.18 kernel and Novell SUSE is on the 2.6.16 kernel. If you use Ubuntu 8.04 Server, keep in mind that packages may detect a newer version of the kernel and want to recompile. A good example is VMware Tools for guest operating systems.
Canonical Ltd. does not support the alpha releases of Ubuntu, which is to be expected. When 8.04 is released after the community development process is complete, Canonical will support the end-state product.
More information about the Ubuntu release can be found at: http://www.ubuntu.com/testing/hardy/alpha1
One of our editors at SearchDataCenter.com created a Google map of international data centers to go with a story, but he was kind of “cold” to Linux:
So, we thought we’d ask some of our Linux experts for their opinions on the international data centers. Here’s what they had to say:
- Pros: Central location, strong infrastructure
- Cons: The French
- Pros: Cheap land, cheap labor
- Cons: Casual Fridays = mankinis
- Pros: Low energy costs, cooler temperatures
- Cons: Data center actually a zamboni. Still runs better than a Vista desktop.
If you want to put in your two cents, email me.
CentOS released version 5.1 of its increasingly popular distribution, which is celebrating its fourth birthday this month. Version 5.1 includes 70 new packages native to the distribution, including 17 in the Yellow dog Updater, Modified (yum) series of package updater tools, 11 in the Standards Based Linux Instrumentation for Manageability (sblim) management tools and an assortment of other packages. Updates in the distribution include Apache, PHP, kernel 2.6.18, GNOME, KDE, OpenOffice.org, Firefox 1.5 and PostgreSQL. The full release notes for the build are available at CentOS.org.
CentOS 5.1 is based on Red Hat Enterprise Linux (RHEL). It remains completely free and offers the compatibility and reliability of RHEL without the costs of build certification and support contracts. This makes CentOS a wonderful test bed or quality system that does not require the full resources (hardware, software costs, support costs) of the enterprise production builds.
What are Red Hat, Novell and Canonical going to have to do in 2008 in order to dominate the desktop and server Linux market?
Let’s take a moment and assess the situation. Red Hat is the dominant force in Linux right now. They own the enterprise market. SUSE is also supported by many IHVs as a ready-to-install operating system (OS), but does not have nearly the market share flaunted by the fedora. Ubuntu is the little Linux OS that could and, in the last three years, it has gripped the desktop Linux market in a stranglehold and will not let go.
It seems that each distribution has found a niche: Red Hat and Ubuntu are the leaders in their markets, and SUSE is a comfortable runner-up. However, history has shown us that businesses are not content to stay still too long or play second fiddle. So, what will Red Hat, SUSE, and Ubuntu have to do in the new year to gain new ground?
I’ve been using Red Hat Linux since the mid-90s. They are arguably the most successful proprietors of Linux ever. Red Hat figured out what many companies are just now figuring out about virtualization: it’s not always about the core technology, it is about how you support and manage that technology. Red Hat provides a better support and management structure for their products than any other Linux vendor. It is no wonder they dominate the enterprise market.
On the flip side of the coin, Red Hat has long since been usurped as the leader on desktops. There was Slackware, then Gentoo and now Ubuntu. Sure, Red Hat sponsors the Fedora Core project, but it does not have the market share to be considered in the same game as Ubuntu. In the coming year, Red Hat needs to get rid of the Fedora Core moniker and reel its desktop community back in under the auspices of the Red Hat name. Red Hat is associated with stability and the enterprise: they need to create a desktop product that also has these associations. True, Red Hat offers its Enterprise Linux Desktop product, but it lacks the bleeding-edge features of Fedora Core that make the latter so appealing to the desktop crowd. Red Hat must figure out how to transition the passion of the Fedora Core audience back into the house that the Fedora built. Once Red Hat is able to recapture those users, it can finally offer a datacenter-to-desktop computing solution that dominates servers and workstations everywhere.
Novell has been one of the most prolific innovators in the IT industry for the past two decades. Unfortunately, the company that should be a global IT leader today has suffered one bad management and marketing decision after another. Case in point: all this nifty, gee-whiz desktop technology called Compiz originated with Novell. Do most people know that? I doubt it. They’re probably more familiar with the rift between Beryl and the original Compiz developers (and the subsequent kiss-and-make-up).
The reason that Novell barely gets credit for its work is that its marketing team never leads with anything remotely innovative. If they played it any safer they’d be asleep! Remember iFolder? Unless you’re a fan of information synchronization software you probably do not. iFolder was a Novell project that offered unparalleled functionality in the arena of client compatibility and server features. What happened to it? Novell did not know what to do with it and open sourced the code in order to wash their hands of the project.
In the next 52 weeks Novell needs to do what they do best: innovate. Then they need to do well the thing they do worst: they need to lead with their innovation. They need to create a mass marketing campaign around SUSE Linux and its new innovative features that will leave the other vendors in the dust. Novell needs to stop playing the shrinking violet and give a new generation of Linux users a reason to hold Novell SUSE Linux high above the other distributions.
Ubuntu has become the desktop user’s Linux of choice in the past three years and shows no signs of slowing down. Canonical understands what Novell does not, and that is marketing. The marketing machine behind Ubuntu has been working non-stop. Additionally, it does not hurt that Mark Shuttleworth, Canonical’s founder and CEO, is as charismatic as Steve Jobs and is forming deals with independent hardware vendors that result in Ubuntu being offered by the likes of Dell on their laptops and desktops.
Canonical is correct that their next move should be to penetrate the server market. From their Server and JeOS versions of Ubuntu to their alliances with IHVs in hopes of getting Ubuntu officially supported on server hardware, they are doing everything correct. However, they could be doing more. Canonical is in the unique position of having herds of passionate users behind them. (Actually, Apple is in the same position, but they seem to have forgotten that they are a computer company.) They have a loyalty base not seen on this side of OS X. Canonical needs to leverage this loyalty and create a vertical initiative that provides even more features to its desktop users as long as the servers those users connect to also run Ubuntu. Think Bonjour for Ubuntu. There is no reason Canonical cannot achieve this with open source projects either, from integrating Beagle with ZeroConf to collaborative Tomboy note-sharing technology. It is all possible.
The ultimate achievement would be when Canonical finally creates an Active Directory-like system to integrate its server OS and desktop OS into a single, manageable environment.
A three-way see-saw
The Linux market is currently a three-way see-saw. Any of the big three vendors could change the balance of things. Do you have a different outlook? I’d love to hear it!
The Ubuntu repositories are lagging when it comes to keeping up with Openswan development. Currently, the latest package in the Feisty pool is 2.4.6, while 2.4.11 was just released (with several bug fixes, including one that allows newer OS X clients to connect). The l2tpd package, which provides an L2TP daemon, is also old because development on the l2tpd project seems to have stalled. The maintainers of Openswan, Xelerance, have forked l2tpd, creating xl2tpd.
Both pieces of software have already been Debianized; it is just a matter of running dpkg-buildpackage to create the binary package files. I have updated the changelogs for both Openswan and xl2tpd and created .deb packages from the latest source code:
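For anyone who would rather roll their own, the build itself is only a couple of commands once the Debianized source is unpacked. A rough sketch (the directory name is illustrative; adjust it for the version you download):

```shell
# Install the basic packaging tools, then the build dependencies
# declared in the source package's debian/control
sudo apt-get install build-essential fakeroot devscripts
sudo apt-get build-dep openswan

# Build unsigned binary packages from the Debianized source tree
cd openswan-2.4.11/
dpkg-buildpackage -rfakeroot -us -uc -b

# The generated .deb files land one directory up
ls ../*.deb
```

The same three-step dance (build-dep, dpkg-buildpackage, install the resulting .deb) works for the xl2tpd source as well.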
I will be writing an article on how to configure Ubuntu with Openswan and xl2tpd. Stay tuned!
Hope this helps!
So I am on a VPN kick lately; I wonder if it shows? I spent the last week setting up and tweaking Openswan on an Ubuntu box in order to allow me to connect to my home network with my MacBook Pro. I finally got it working — you can see some of the fun gotchas you might run into when using Leopard to connect to Openswan at my own blog — but I could not actually see anything on my home network. Well, it turns out I am a special case. (My wife insists I add the prefix “head” to “case.”) My VPN box was never previously a part of my home network topology. It was a DNS and DHCP server, but it played no role in packet switching or forwarding. I guess most people install VPN software on a Linux box that is already a router of some sort. Thus the kernel did not have packet forwarding turned on, and the VPN server was not forwarding packets to the rest of the network.
To turn packet forwarding on simply issue this command:
echo "1" > /proc/sys/net/ipv4/ip_forward
After you do this the packets will flow! Of course, I would have known about this a lot sooner if I had used the “ipsec verify” command. This command checks whether your system is properly configured to run Openswan and tells you what you need to do in order to get it into a ready state.
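One gotcha: the echo shown above only survives until the next reboot. To make forwarding permanent, most distributions (Ubuntu included) read /etc/sysctl.conf at boot, so a quick sketch, run as root, would be:

```shell
# Same effect as the echo, via the sysctl interface
sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf

# Reload /etc/sysctl.conf to confirm the new entry parses
sysctl -p
```

That way your VPN box keeps routing after a power outage instead of silently dropping back to its old non-forwarding behavior.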
Hope this helps!
For what it’s worth, a test by Redmonk analyst Stephen O’Grady of Virtual Machine Manager (VMM) on Fedora 8 was “not so much success as much as total failure.”
After overcoming some preliminary hurdles (i.e., a lack of Live CDs for x86- and 64-class machines, the inability to run Fedora off a memory stick, and the need to manually add the virtualization packages), O’Grady got things stable enough to try to run an Ubuntu Gutsy Gibbon virtual machine (VM). But alas, it was not meant to be:
Opening Fedora’s Virtual Machine Manager, which is a very nice piece of software despite my lack of success with it, I first attempted to virtualize Ubuntu Gutsy. Start with what you know, I figured. Curiously, however, I had to manually start the libvirt daemon as it hadn’t been initiated despite the install and subsequent start of the Virtual Machine Manager. Once that was figured out, I created a Gutsy image and began the install. Everything went smoothly until it failed, complaining of “corrupt packages” and an inability to connect to the network. Given that I later used to [sic] the same disk to create an instance of Gutsy on VMWare, I think the media is fine. And the network connection worked beautifully during the initial part of the installation. So I did the easiest thing I could think of, and gave up on Gutsy.
He experienced the same sad story with Windows Server 2008:
Instead, I’d try to virtualize Windows Server 2008, as I knew Senor Dolan had successfully done just this. Forgoing his instructions, which were command line based, I thought I’d try to use the GUI just as an experiment. That failed, as it was rumored it might, due to ACPI [Advanced Configuration and Power Interface] errors. At which point I turned to Michael’s CLI method, which also failed because Fedora returned a “command not found” to kvm. Turns out that kvm is instead qemu-kvm on Fedora: my bad. After I figured that bit out, the install of Windows Server 2008 promptly bluescreened.
Ditto for Vista:
Next, I tried to virtualize Vista. Blue screen numero dos. Complicating my efforts with all of the above were the slight differences between Fedora and Ubuntu; you can’t modprobe on Fedora, apparently, even as root. Nor can you sudo, by default – su is your only option.
At that point, our intrepid analyst “cut bait” and decided to retreat back to VMware running on Ubuntu.
What’s remarkable about all this, though, is O’Grady’s tenacity and continued open-mindedness in the face of adversity. For all its flaws, Fedora Virtual Machine Manager also has its share of virtues, despite the fact that they’re well hidden:
The Virtual Machine Manager, as mentioned, comes with a nice, simple GUI that’s very usable once you get the hang of it (connect to the localhost before you try creating VMs). More importantly, the virtualization technologies under the hood in KVM and Xen are more capable than many realize. Xen, as an example, is the underlying technology behind Amazon’s EC2, and KVM is hugely popular with the distros due in part to its light weight. Both are eminently capable of virtualizating the very operating systems I failed to, and my lack of success is at least as attributable to me as it is to the packages themselves.
O’Grady remains undeterred and promises to check in with Fedora VMM again in a couple of months.
For the complete text, click here.
When you think mono, you think tired. You think sleepy. You think shut-yourself-up-in-your-bedroom-for-two-weeks-and-snooze-like-Rip-Van-Winkle. You get the idea.
But that is not how it should be. Mono isn’t boring. Mono should excite people! I am speaking, of course, of the open source implementation of the Microsoft .NET Framework. It should make them stand up and say, “Wow! Here is one of the most interesting projects I’ve seen in a long time in the wonderful world of Linux.” And yet, this is not the case. People are taking their prescription sleep aids, turning off their cell-phones and settling down for a long winter’s nap. So why is it that one of the most ambitious projects that I have ever laid eyes on is not garnering more enthusiasm?
Misconceptions about Mono
No matter what you know or how smart you are (or think you are), it can be near impossible to change people’s perceptions. This is especially true in the software world (see Apple). And unfortunately for Mono, there are a couple misconceptions that keep it from gaining ground.
1. People think Mono is simply a derivative of the Microsoft .NET Framework.
Even though the Mono FAQ (http://www.mono-project.com/FAQ:_General) points out that this is not the case, the common perception keeps many people from using Mono to build their projects.
2. People assume that Mono is not ready for the enterprise.
One of the reasons for this is because not many enterprise projects are being built with it (I’ll get to that later.) Instead, Mono is primarily being used to construct desktop software.
The second reason for this misconception is the Mono project’s inability to stay in step with Microsoft .NET. Currently, Mono is somewhere between .NET versions 1.1 and 2.0, while .NET 3.5 was just released. This is not the Mono team’s fault: Microsoft does not collaborate with them, so everything the Mono team accomplishes is through their own blood, sweat and tears. Nevertheless, this version discrepancy creates the perception that Mono is just a .NET wannabe.
How Mono can improve
I have a few ideas on how Mono can realize its potential:
Software
With the exception of iFolder, Mono is not being used to develop any truly useful enterprise applications. Great desktop applications are being created, like Tomboy and Beagle, but no one has created the next great server application using Mono. Until this happens, IT administrators won’t see Mono as an equal to the other common framework, Java. My suggestion is that more effort be put behind iFolder, as it is already a very useful application. With some work, iFolder could compete with Xythos Webspace and be a poster child for Mono in the enterprise.
Ubiquity
I cannot tell you how many IT administrators have been hesitant to use software I have written in C# (for the sole purpose of being cross-OS compatible via Mono) because Mono was not installed on their servers. Java is installed by default on many Linux distributions, making it an easy development choice. The Mono team needs to work more closely with Linux distributions to ensure that Mono is pre-installed, making it an equally easy choice.
Memory Leaks
If you have ever used Beagle you know that it can heinously crash your system. Since its inception, Mono has been plagued with a random memory bug: your system memory usage will go from 1% to 1000% in a matter of seconds, without warning. If the Mono developers want Mono to be more than just a desktop hobbyist’s language, they need to fix this bug once and for all.
Python Effect
There is a huge movement in the Gnome community to make Python the standard language for Gnome development. Mono is a close second, thanks in part to the great desktop applications being written with it. However, if Python is officially adopted, there will be a backlash against Mono, or pressure on developers to adopt Python and port their Mono applications to the official language. In order to prevent this from happening, Mono developers need to demonstrate Mono’s cross-compatibility. The Mono team needs to have Mono installed by default on Linux so that if you write an application with Mono it can run on Windows AND Linux (and even OS X).
Python is also cross-compatible, but I do not foresee Python being installed by default on Windows. Mono stands the best chance of being the first cross-compatible language out of the box. This is Mono’s best play to fight off the Python effect.
Mono is a great framework and C# is a tremendous language. I like them both very much, but I am ready to move to Python myself if I do not see more initiative by the Mono camp to make Mono more accessible. Having a great language and product is not nearly enough. Again, Apple is a good example of this. Sure, OS X is great, but the real reason for Apple’s resurgence is its darn good marketing team and the iPod. From the iMac commercials to Steve’s securing of digital music distribution rights, it was a master marketing strategy. The Mono team needs to think outside the IDE and start securing Linux distribution partnerships to build some fantastic reasons to use Mono in the enterprise.
A few weeks ago, Oracle unveiled its new virtual machine (VM) product, Oracle VM, based on the Xen hypervisor. But why is Oracle introducing its own VM when it already has the Xen VM that comes with the Red Hat code? For an answer, we turn to SearchEnterpriseLinux.com expert Don Rosenberg, who fills us in on what Oracle VM means for users, the competition and for Linux in the enterprise.
Red Hat ships the Xen VM with Red Hat Enterprise Linux (RHEL), with the same Red Hat source code that Oracle uses to build its Unbreakable Linux operating system.
But Oracle chose to go directly to Xen.org to download the source code for its own Oracle VM. While Red Hat runs Xen from within an operating system, Oracle runs its VM on a server. From here the Oracle VM deploys agents or images to computers without an operating system on them, creating virtual servers.
Oracle describes its VM as a console for the management of Xen, complete with a built-in operating system, making it a software appliance. The appliance has paravirtualized drivers for RHEL 4 and 5 but currently runs Windows without paravirtualization, resulting in sluggish Windows performance. Oracle claims its VM is three times more efficient than the leading VM (presumably VMware), but this comparison does not refer to speed so much as to the use of resources on a box. On a box that needs an OS and VMware installed, running via VMware would take up roughly three times the resources; Oracle’s software appliance saves space.
Oracle’s virtualization strategy
The Oracle VM is free to download and use; those wanting support will have to sign up for a paid plan. But Oracle says that its virtualization solution is still cheaper than Red Hat’s. RHEL supports some virtualization (at no extra cost), but full-blown implementation requires an additional product: Red Hat Enterprise Linux Advanced Platform.
The Red Hat solution calls for Red Hat-certified products from third parties, but an Oracle VM will run only Oracle databases, middleware and applications. By releasing its own VM, Oracle avoids third-party complications (such as software dependencies and support finger-pointing) and third-party payments. It also ends up controlling the software stack from top to bottom, including virtualization.
One can presumably find Oracle VM customers among the 1,500 that Oracle says already pay for Unbreakable Linux support (Dell, Stanford University, McKesson and Mitsubishi, among others). The unknown number of customers already running Oracle on VMware now have to decide whether to accept Oracle support (along with the Oracle VM) or continue to run on the competitor’s product without support from Oracle. Oracle customers who already run on Xen will find the switch to Oracle VM easier, of course.
Oracle says it has 9,000 developers at work on its software products, including Linux, and points out that Red Hat’s total employees amount to only 2,000. Oracle needs all its software skills to track Red Hat as closely as possible. Red Hat is upping the ante by announcing that in 2008 it will offer software vendors the Red Hat Appliance Operating System. Applications can be written to this layer to produce a software appliance that will run on any Red Hat system, physical or virtual, no matter where it is located.
Linux has accelerated virtualization
The Age of Virtualization is upon us, and I don’t believe we would have gotten this far this fast without open source software. Virtualization originated on mainframes, and when IBM finally “got it,” it used Linux to revive a company that was sinking slowly into the past. By adopting Linux, it came up with an OS that could be used on all of its hardware. And by applying its mainframe know-how, it came up with such marvels as the mainframe that could configure itself to be multiple server instances by day, then turn back into a mainframe at night (for order taking and order batch-processing, respectively), or run any combination of mainframe and servers. Moving client/server over to mainframe virtualization eventually gave way to cloud computing. Combined with grid computing, servers and applications are now thought of as “somewhere out there” in a virtual space. Because IBM made these improvements to Linux, the code was fed back into the Linux kernel, which was meanwhile being improved from the other direction (such as hundreds of servers being linked to form a mainframe). The invisible hand of the free market supplied a wealth of code that could be freely downloaded and reworked for anyone’s use.
All this happened in a world in which the dominant computer systems in businesses were desktops that eventually (with the help of open source BSD code) managed to form networks. They used one type of processor design (Intel) and one brand of operating system (Windows). VMware caught the eye of open source developers not only because it allowed network technicians to design, build and test networks while using only a single box, but because it took on the problem of how to use both Linux and Windows on a single box without rebooting.
This achievement rattled the windows in the Wintel offices. A few years earlier, Netscape boasted prematurely about its plans to build a platform that would be OS-independent and died as a result. And IT departments, tired of having to do separate installs for each Windows box, admired the way Linux could be shot over the wire to an unconfigured machine. This was an early virtualization concept that looked ahead to a world we may yet enter, one where the end user’s processor and software may be something other than Wintel. Porting apps would be less necessary if they were written to a layer high enough above the operating system(s).
Long ago, IBM and Apple had a joint venture to develop such a layer. The plan was to use layers to effectively virtualize operating systems and processors. Taligent collapsed from the weight of its own ambitious plans, but we are a lot closer to its goal. Now even Microsoft is getting into virtualization, competing with Red Hat and Oracle to build virtual data centers that most effectively use resources in real time.
Now that the open source Xen project has taken on some of the functions of VMware, what will become of this proprietary product that had so much to do with the current virtualization surge? It is difficult in an expanding market to say that VMware’s sales will drop, for it is already giving away the low-end server version of their product. Because it handles many more operating systems and does more things than Xen, VMware will survive in a specialized marketplace. The question is, will Xen push down VMware prices? Or, as with the move from CentOS to RHEL, will Xen’s position at the low end of the market serve to support a high price for VMware?