“If you don’t evaluate Linux desktops, you may be liable to your shareholders because you’re spending an enormous amount of money to upgrade to Vista without a demonstrable ROI,” said open source developer turned Novell exec Nat Friedman during our recent conversation.
“You’ll spend $30 million to upgrade to Vista, and you’ll have exactly the same functionality and same level of security challenges we had before. Instead, you could spend less money and not have to replace hardware by going with a Linux desktop, along with having fewer security challenges, built-in sandboxing for your applications, inclusion of an office suite…and all at a tenth of the licensing cost of Microsoft. Oh, and there’s better manageability.”
During our talk, Friedman addressed his comments particularly to IT managers who run Microsoft-only desktop shops. Even though Friedman is a long-time Linux desktop evangelist and biased, I think he made some good points, and I’ll share them in this post. Friedman is Novell CTO/Strategy Officer for open source and was co-creator of the Novell-acquired Ximian desktop applications for Linux and Unix.
The introduction of Vista sets the stage for exploring other options, giving IT managers — even those whose bosses are phobic about running anything other than Microsoft desktops — a good reason to do evaluations, he said.
“Vista just came out, and it’s very expensive, requires you to refresh your hardware, includes user interface changes that require you to retrain your users, requires you to retool your backend systems. It also brings expensive transition costs. I can’t think of a reason not to, at the very least, do an evaluation of Linux desktops.”
IT managers have nothing to lose by starting a Linux desktop pilot program at this stage. “If it doesn’t work for them, so be it.” Friedman is confident, however, that Linux desktops will work for at least part of most organizations. Besides the benefits of the Linux desktops themselves, he says, there’s another plus.
“Just think, if you deploy Linux desktops in part of your organization, you’re going to have more leverage in your next pricing discussion with Microsoft. You can say, ‘We have 200 Linux desktops that work great and cost less, and we’re thinking about going to 2,000. You need to lower the price.'”
The release of Vista is not the only reason why this will be a good year for Linux desktop deployments to increase. For one thing, Friedman said, there are no technological barriers: the Linux desktop software is sufficiently usable and interoperable. For another…
“More people are leaning toward a Web-based application model — like Salesforce.com and Google — and that will lead to them being less dependent upon Microsoft for running applications.”
Linux desktop adoption is steadily increasing worldwide, but people don’t recognize this fact because there hasn’t been a big adoption surge, Friedman said.
As the moral of Aesop’s tortoise and hare fable goes: Slow and steady wins the race.
Linus Torvalds, the father of Linux, has been uncharacteristically vocal in the media arena over the past six to eight months. I think this probably has something to do with the vitriolic atmosphere that now surrounds the GPLv3 debate, but I could be wrong. That happens a lot.
Today, I learned he had recently penned a post at LKML.org that basically ripped Sun a new one over their true intentions with open source software.
Yes, they finally released Java under GPLv2, and they should be commended for that. But you should also ask yourself why, and why it took so long. Maybe it had something to do with the fact that other Java implementations started being more and more relevant?
Am I cynical? Yes. Do I expect people to act in their own interests? Hell yes! That’s how things are _supposed_ to happen. I’m not at all berating Sun, what I’m trying to do here is to wake people up who seem to be living in some dream-world where Sun wants to help people.
My my. One would think Jonathan Schwartz would have something to say about this… And he does! In a blog entry up today, the Sun Microsystems CEO responds directly to Torvalds in a post entitled “An OpenSolaris/Linux Mashup.”
First, I’m glad you give credit to Sun for the contributions we’ve made to the open source world, and Linux specifically – we take the commitment seriously. It’s why we freed OpenOffice, elements of Gnome, Mozilla, delivered Java, and a long list of other contributions that show up in almost every distro. Individuals will always define communities, but Sun as a company has done its part to grow the market – for others as much as ourselves.
But I disagree with a few of your points. Did the Linux community hurt Sun? No, not a bit. It was the companies that leveraged their work. I draw a very sharp distinction – even if our competition is conveniently reckless. They like to paint the battle as Sun vs. the community, and it’s not. Companies compete, communities simply fracture.
And so on and so on, and around we go. Schwartz implies he “loves” the direction the GPL is headed. In the past, Torvalds was cold to it, then warm, and then warmer still just last week when he said, “I don’t think the GPLv3 is as good a license as v2, but on the other hand, I’m pragmatic, and if we can avoid having two kernels with two different licenses and the friction that causes, I at least see the _reason_ for GPLv3. As it is, I don’t really see a reason at all.”
Yes folks, that’s what’s passing as “Linux warms to GPLv3” in the media today.
So, could Sun be open sourcing things like Java faster than they have? I’m sure many people out there think they could. But Schwartz has his reasons:
Why does open sourcing take so long? Because we’re starting from products that exist, in which a diversity of contributors and licensors/licensees have rights we have to negotiate. Indulge me when I say It’s different than starting from scratch. I would love to go faster, and we are all doing everything under our control to accelerate progress. (Remember, we can’t even pick GPL3 yet – it doesn’t officially exist.) It’s also a delicate dance to manage this transition while growing a corporation.
Schwartz concludes with a paragraph asking Torvalds to lay down his sword, along with a promise not to engage in any patent nonsense. Whether the call is heard from Linus’ Linux Tower remains to be seen. As far as Schwartz is concerned, it seems, the ball is in Torvalds’ court.
Linus Torvalds made some comments today on the GPLv3 (which he hated, then warmed to, and has since warmed to again. Honestly, how warm can a guy get over a software license?):
“I was impressed in the sense that it was a hell of a lot better than the disaster that were the earlier drafts,” Linus Torvalds explained in reply to a comment suggesting that he was impressed with the final draft of the GPLv3. He went on to add, “I still think GPLv2 is simply the better license.” The discussion began with a suggestion that the Linux kernel be dual-licensed GPLv2 and GPLv3. Linus noted, “I consider dual-licensing unlikely (and technically quite hard), but at least _possible_ in theory. I have yet to see any actual *reasons* for licensing under the GPLv3, though. All I’ve heard are shrill voices about ‘tivoization’ (which I expressly think is ok) and panicked worries about Novell-MS (which seems way overblown, and quite frankly, the argument seems to not so much be about the Novell deal, as about an excuse to push the GPLv3).”
That last bit is pretty telling. One has to wonder if the “controversy” is really the work of the community as a whole, or just a vocal minority. A passionate vocal minority, sure, but a minority nonetheless. I think most IT guys out there are still of the “just give me what works best for me” variety. If that’s what Microsoft-Novell are offering (or saying they will offer, actually), then so be it for them. I could be wrong, as this patent stuff is still pretty crazy right now.
IBM DeveloperWorks is running an in-depth look at the inner workings of the Linux kernel today, so all you Windows admins who sneak over here to SearchEnterpriseLinux.com on your lunch breaks should go check it out. Virtual file systems, drivers, system call interfaces, and a host of other Linux kernel subsystems are yours for the viewing. And according to the article, Linux currently boasts six million lines of code. To which I say, just code? How about six million lines of fun?!
Perhaps the most interesting part of the DeveloperWorks piece, however, is its rundown of the new features found in the latest stable version of the Linux kernel.
Linux, being a production operating system and open source, is a great test bed for new protocols and advancements of those protocols. Linux supports a large number of networking protocols, including the typical TCP/IP, and also extensions for high-speed networking (greater than 1 Gigabit Ethernet [GbE] and 10 GbE). Linux also supports protocols such as the Stream Control Transmission Protocol (SCTP), which provides many advanced features above TCP (as a replacement transport level protocol).
Linux is also a dynamic kernel, supporting the addition and removal of software components on the fly. These are called dynamically loadable kernel modules, and they can be inserted at boot when they’re needed (when a particular device is found requiring the module) or at any time by the user.
A recent advancement of Linux is its use as an operating system for other operating systems (called a hypervisor). Recently, a modification to the kernel was made called the Kernel-based Virtual Machine (KVM). This modification enabled a new interface to user space that allows other operating systems to run above the KVM-enabled kernel. In addition to running another instance of Linux, Microsoft Windows can also be virtualized. The only constraint is that the underlying processor must support the new virtualization instructions. See the Resources section for more information.
Flexible. Open. KVM and hypervisors. Sounds pretty good to me.
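The virtualization-instruction constraint the article mentions is easy to check from a shell. Here’s a generic sketch (my own, not from the DeveloperWorks article) that looks for the CPU flags KVM depends on — vmx on Intel VT-x parts, svm on AMD-V parts:

```shell
#!/bin/sh
# Look in /proc/cpuinfo for the flags KVM's full virtualization needs:
# "vmx" (Intel VT-x) or "svm" (AMD-V).
if grep -Eq '\b(vmx|svm)\b' /proc/cpuinfo 2>/dev/null; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag found; KVM full virtualization unavailable"
fi
```

Note that even with the flag present, virtualization support can still be disabled in the BIOS, so treat this as a first check rather than a guarantee.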
This is just a quick link to a ComputerWorld interview with Mark Shuttleworth detailing the success of Ubuntu to close out the day.
Wouldn’t have it any other way, right? Well, if you like Ubuntu anyway. So tell me about it, and why!
It’s safe to assume that the Microsoft-Novell partnership will only grow more controversial in the coming months than it was when first announced last November, simply because there is so much more we could and will learn about the partnership as time goes by.
Today’s revelation that Microsoft has hired Tom Hanrahan, former director of engineering at the Linux Foundation, to head the Interoperability Lab does little to dispel that belief. The Interoperability Lab is a jointly run operation for Microsoft and Novell designed to make Novell’s SUSE Linux Enterprise Server 10 run on Microsoft Virtual Server 2005.
For all the controversy, however, the partnership rumbles ever onwards.
A column at the New York Times keeps the Linux patent debate in the spotlight for one more day…
What a difference 16 years makes. Last month, the technology world was abuzz over an interview in Fortune magazine in which Bradford Smith, Microsoft’s general counsel, accused users and developers of various free software products of patent infringement and demanded royalties. Indeed, in recent years, Mr. Smith has argued that patents are essential to technological breakthroughs in software.
Microsoft sang a very different tune in 1991. In a memo to his senior executives, Bill Gates wrote, “If people had understood how patents would be granted when most of today’s ideas were invented, and had taken out patents, the industry would be at a complete standstill today.” Mr. Gates worried that “some large company will patent some obvious thing” and use the patent to “take as much of our profits as they want.”
Today Microsoft tends a herd of more than 6,000 patents. The column’s bottom line (which it reinforces with excerpts from the legal battle brewing between Verizon and VoIP provider Vonage): patents are bad for the software industry.
The new version of virt-manager shows the direction that the Fedora team (and consequently Red Hat) is taking with its GUI virtualization management tool, and it looks very promising. One of the most interesting things about it is its ability to manage multiple virtualization technologies. On first load, it asks which virtualization backend you want to attach to – Xen or QEMU. For those who don’t know, QEMU is an open source machine emulator for running fully virtualized operating systems, and is what Xen’s code for fully virtualized guest OSes is based on. I imagine it will also manage KVM, although I didn’t install and test it (or the QEMU management, for that matter). That said, I found it encouraging to see the evolution of libvirt, the underlying abstraction layer that allows a single tool to manage multiple virtual machine mechanisms.
Virt-manager offers several new features to ease management of a server. In the past, one of the tougher things to do with Xen has been setting up the underlying networking layer for the virtual environment, since all the work had to be done at the command line through a variety of scripts and config files. Virt-manager eases this pain with a new “Host Details” configuration page, which includes simple forms and wizards to help set up networking for the virtual environment. In addition, all the hardware settings for a virtual machine are now configurable through updates to the previous “Virtual Machine Details” screens (for each specific guest OS), including virtual disks, virtual SMP, memory, and network config. These new tools become particularly important when you discover that virtual machine configuration details are now stored in a database rather than in text files.
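For context, what the “Host Details” wizard is assembling behind the scenes is a libvirt network definition. Roughly — and as a hypothetical, trimmed example rather than actual virt-manager output — a NAT-style virtual network definition looks like this:

```xml
<!-- Hypothetical libvirt network definition, similar in shape to the
     default NAT network libvirt creates; not literal wizard output. -->
<network>
  <name>default</name>
  <bridge name='virbr0'/>
  <forward mode='nat'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```

The wizard is essentially a friendly front end for generating and registering XML like this, which is exactly the kind of thing that used to require hand-editing scripts and config files.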
The Not So Good, but Necessary
The Fedora team has followed the XenSource model and begun to store all VM configuration details in a database, referred to as xenstore. This lets the rest of the tools use one consistent resource for reading configuration details, but it also complicates configuration changes. For example, virt-manager now shows all configured virtual machines, whether they are running or not (it used to show only running ones), which lets you start paused or powered-off virtual machines from within the tool. The bummer is that a complex configuration requires manipulating the database through a series of command line tools. The Fedora team has eased this a bit through virsh (a command line tool that leverages libvirt), through which you can run:
virsh dumpxml [vm name] > diskfile.xml
to get an editable XML representation of the VM configuration. Once the file has been edited, the changes can be applied by issuing:
virsh define diskfile.xml
The configuration options are nearly identical to the former Xen configuration, only represented in XML, so anyone familiar with Xen will have no problem making the appropriate changes and applying them back to the database. In addition, you can still use the standard Xen “xm” commands with a config file to create/start a VM, but these won’t be added to the database, so when they stop, they disappear from virt-manager.
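If you want to script that dump/edit/define cycle, simple edits can be done with sed. The snippet below is a sketch against a trimmed, hypothetical domain file (a real virsh dumpxml dump carries many more elements, and “myvm” is a made-up name); it bumps a guest’s memory, with the final virsh define left commented since it needs a live Xen host:

```shell
#!/bin/sh
# Hypothetical, trimmed stand-in for `virsh dumpxml myvm > diskfile.xml`;
# a real dump includes devices, disks, console config, and more.
cat > diskfile.xml <<'EOF'
<domain type='xen'>
  <name>myvm</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
</domain>
EOF

# Bump the guest's memory from 256 MB to 512 MB (values are in KiB).
sed -i 's|<memory>262144</memory>|<memory>524288</memory>|' diskfile.xml

# Show the updated <memory> line.
grep '<memory>' diskfile.xml

# On a real host you would then apply it back to the database with:
#   virsh define diskfile.xml
```

It’s crude, but for a change you make across many guests it beats clicking through a GUI or hand-editing each dump.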
Fedora 7 is based on Xen 3.1 RC7 (sort of), which should enable new features such as 32-bit paravirtualized guests on 64-bit hosts, native 64-bit guests, and the like. You might wonder why it doesn’t include the final 3.1 release – this is because Fedora’s development freeze occurred prior to the final Xen 3.1 release. As a result, you end up with a kernel that has much of the 3.1 feature set, but not all of it. For example, I tried a variety of methods to start up a 32-bit VM on a 64-bit host – from installing directly, to installing on a 32-bit host and copying, to migrating from a 32-bit host to a 64-bit host – all with the same result: instant death of the 32-bit VM. After contacting the fedora-xen mailing list, I discovered that the current Xen kernel in Fedora 7 only has a subset of the 3.1 feature set, and that the next updated kernel (which I’m sure will arrive very soon) will have the remaining pieces. It would have been nice if there were some mention of that in the release notes.
Overall, the tools for Xen management are coming along quite nicely, actually developing a bit faster than I expected, and Fedora 7 is a great place to try them out. They will certainly ease Xen management (and other virtualization technologies on Linux, for that matter) in the future and I look forward to taking advantage of them when they make their way into RHEL 5.1.
My drive is in writing code, and being able to look at other code that has what I want, plain and simple. In that sense, the GPL made it easy to do those two things: all technology is driven by convenience. PHP isn’t popular because of its “enterprise-class frameworks”, it’s popular because it’s easy to grab code from elsewhere, easy to write code in. Windows is easy because it comes with your computer. The GPL made it easy to be open-source.
In the past few years it seems everyone has become a zealot for something in computing, not because they’re a visionary, but because they’re a bully. And to be honest? I don’t really give a [****]. I don’t plan on using licenses for the advancement of some ideologue’s great Cause, and I don’t plan on consulting a lawyer just to write code and see if I’m Compliant.
So in the past few years I’ve released stuff as BSD/MIT/etc. (Gasps.) Do I care that people can use my code and not contribute back to the “community”? Not really. For one, I haven’t found that to be the case. But secondly, it’s just easier. It’s easy to use code and to release code. No Visions, no Causes, no lawyers, no Compliance and papers-please-style-development. Just some guy on the internet putting his code up for use.
The comments came after Slashdot linked to a William Hurley column on the death of software licenses. In that column, Hurley (formerly of Qlusters, btw) says, “current revisions to the GPL are diluting any viral effect it may have had in the past, and distracting us from the real issue: Version 3 is going to distance Richard Stallman and the Free Software Foundation from the developers that make the organization so influential to begin with.”
As Microsoft continues to snatch up companies with its patent protection pledges, this issue could come to a head in 2007. But a lot of the commentary is more emotional heat than real substance, I’m finding, so I’m asking IT managers to weigh in on how this kind of stuff affects their bottom line. We know how it affects lawyers (it gives them work), and how it affects developers (their livelihoods), but what about the “IT guys”?
Fedora 7 just launched the other day, so let’s throw up a handy tip for installing VMware Server, shall we?
HowtoForge’s intro to the install:
This tutorial provides step-by-step instructions on how to install VMware Server on a Fedora 7 desktop system. With VMware Server you can create and run guest operating systems (“virtual machines”) such as Linux, Windows, FreeBSD, etc. under a host operating system. This has the benefit that you can run multiple operating systems on the same hardware which saves a lot of money, and you can move virtual machines from one VMware Server to the next one (or to a system that has the VMware Player which is also free).
Also, with VMware Server you can let your old Windows desktop (that you previously converted into a VMware virtual machine with VMware Converter, as described in this tutorial: http://www.howtoforge.com/vmware_converter_windows_linux) run under your Fedora desktop. This can be useful if you depend on some applications that exist for Windows only, or if you want to switch to Linux slowly.
Step-by-step instructions and some glossy screenshots of the VMware on Fedora 7 install in action can be found at the HowtoForge web site.