There are a couple of BIND notifications and updates this morning that I thought I’d share with you. The first is a security notification from the Internet Systems Consortium, which oversees the BIND project.
I. Description: ISC (Internet Systems Consortium) BIND 8 generates cryptographically weak DNS query IDs which could allow a remote attacker to poison DNS caches.
This bug only affects outgoing queries, generated by BIND 8 to answer questions as a resolver, or when it is looking up data for internal uses, such as when sending NOTIFYs to slave name servers.
From the ISC BIND security page:
“The DNS query id generation is vulnerable to analysis which provides a high chance of guessing the next query id. This can be used to perform cache poisoning by an attacker.”
All users are encouraged to upgrade (see below — jack)
II. Impact: A remote attacker could predict DNS query IDs and respond with arbitrary answers, thus poisoning DNS caches.
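To see why predictable query IDs are dangerous, here is a toy Python sketch. The `WeakResolver` class and its sequential counter are my own illustrative assumptions (BIND 8's real generator was a weak PRNG, not a plain counter), but the attack shape is the same: observe one ID, predict the next, forge the reply before the real answer arrives.

```python
import secrets

class WeakResolver:
    """Toy resolver whose 16-bit DNS query ID is trivially predictable.

    Illustrates the *class* of flaw in the advisory (a guessable ID
    sequence), not BIND 8's actual generator.
    """
    def __init__(self, seed):
        self.qid = seed & 0xFFFF

    def next_query_id(self):
        self.qid = (self.qid + 1) & 0xFFFF
        return self.qid

def strong_query_id():
    # What a fixed resolver should do: draw each ID from a CSPRNG.
    return secrets.randbelow(0x10000)

# An off-path attacker who learns one query ID can forge the next reply:
resolver = WeakResolver(seed=1234)
observed = resolver.next_query_id()           # sniffed or leaked ID
attacker_guess = (observed + 1) & 0xFFFF      # trivial prediction
assert attacker_guess == resolver.next_query_id()  # spoofed answer matches
```

With 65,536 possible IDs, a truly random generator forces an attacker to flood tens of thousands of guesses into a narrow race window; a predictable one reduces that to a single packet.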
III. Solution: Upgrade or Patch
This issue is addressed in ISC BIND 8.4.7-P1, available as a patch that can be applied to BIND 8.4.7.
The more definitive solution is to upgrade to BIND 9. BIND 8 is being declared “end of life” by ISC due to multiple architectural issues. Please see ISC’s website at www.isc.org/sw/bind/bind8-eol.php for additional information and tools. Note that BIND 8.x.x is End of Life as of August 2007.
On that last note, we have an end-of-life update (re: 2008) from ISC about BIND 8.
Due to the continuing level of effort required to support BIND 8, ISC has decided to change the status of BIND 8 to ‘end of life’.
ISC strongly encourages users who depend on BIND 8 to migrate to BIND 9 as soon as possible.
It’s never easy to retire a product. The security issues of BIND 8 are many, and 7 years after the release of BIND 9, ISC must devote our efforts to maintaining and enhancing the current version. BIND 9 was always intended as a replacement for BIND 8, thus there are no more BIND 8 releases planned beyond 8.4.7-P1, being released today.
Please see ISC’s website at http://www.isc.org/sw/bind/bind8-eol.php for additional information and migration tools.
Blogbeebe took a look at just how many Ubuntu PCs Dell is expected to *really* sell this year, and after further review the answer is “not many.”
I’m inclined to agree with that analysis, even in light of SearchEnterpriseLinux.com’s coverage of Dell’s expansion of the pre-installed Linux program into European markets. I’ll try to explain why.
When I headed out to San Fran earlier this month for LinuxWorld, I got the chance to have what was pretty much a one-on-one with Dell execs during a dinner meeting before the show began. A handful of journalists, including SearchDataCenter’s own Matt Stansberry, got to sit toe to toe with Dell execs and discuss, carte blanche, anything and everything they wanted (this included a rather lengthy debate on grizzly bear hunting in the wilds of Alaska. Thanks, Matt).
While much of the conversation focused on Dell’s energy saving hardware initiatives (The Next Generation Data Center was being held concurrently with LinuxWorld this year), I took the opportunity to try and get Dell execs to define what “success” means for their pre-installed Ubuntu Linux on Dell hardware program, which was unveiled earlier this year in May.
Why try and define it, especially as the program was being expanded to select European nations?
Well, first off, as I wrote about in my article covering the expansion, the European market *hearts* Linux a bit more than we Yanks do in the States, so the move was seen by some as a no-brainer for Dell. Second, and more importantly, Dell refused to give any numbers whatsoever on the program. Instead, when pressed at that dinner, Dell’s director of enterprise marketing Judy Chavis told me that the move to Europe in and of itself was enough proof that the program was working. Perhaps it is, but 20,000 units shipped isn’t even close to the more than 120,000 or so users who demanded that Dell change its ways on the IdeaStorm site.
Multiple sources always help, so here’s Channel Insider’s Scott Ferguson on that same Dell dinner:
So far, it’s hard for Dell to measure the full success of its Linux launch. Judy Chavis, director of enterprise marketing for Dell, said it would take some time for outside analysts and the company to determine the exact number of customers buying and using the company’s Ubuntu PCs. However, judging by the response the company received when the idea was first floated on Dell’s IdeaStorm blog in Feb., the notion of the Linux desktop is catching on with the public.
“A lot has to do with people being comfortable with a Linux desktop,” Chavis said. “What we are seeing are customers who are on their second PC and are looking to give it a try and see what happens. One of the big benefits for us is that the applications are much better on the desktop side than they were several years ago.”
It’s hard to measure because Judy wasn’t telling us :-).
Now, is 120,000 a large number of people? By itself, sure, but comparatively speaking it’s kind of pathetic stacked up next to the number of people running Windows XP right now. It’s growing, I know. And yet, strangely, it’s much larger than the 20,000 people who have bought an Ubuntu desktop or laptop thus far. As I said earlier, where are the other 100,000?!
Here’s some timely PC news to put that number into context: Taiwan’s leading computer seller Acer will soon take over PC maker Gateway in a $710m deal. According to an article from the BBC, the takeover will create the world’s third largest producer of personal computers, with shipments of more than 20 million PCs and sales of $15 billion. That’s the third largest, and they’re selling 20 MILLION PCs!
I’ve also seen a fair share of excuses over at sites like LinuxToday.com, which is disheartening. It’s disheartening because there’s no conspiracy here, at least on Dell’s part. They aren’t “hiding” the Ubuntu boxes on their web site, and this is certainly not a “project that was designed to fail.” Why? Because the simplest explanation is often the right one. Not supplying Linux would have been cheaper than supplying it and then hiding it. People need to get serious here and really look at the reasons why the number sold is only 20,000.
Is it still early, and therefore too early to pass judgment? Maybe. But for all the fervor heaped onto this IdeaStorm coup, there appears to be very little follow-up from the Linux community, which had been so passionate in the months preceding Dell’s May announcement.
Maybe they’re all waiting for Hannahkwanzachristmakkah. Maybe 20k is all there is. If so, it doesn’t bode well for Ubuntu’s future on Dell.
What the heck, right? It’s a Monday and I just posted GPLv3 growth info last week, but I’m going to do it again anyway!
August appears to be a watershed month for GPLv3 adoption. Last week I mentioned that the new license saw 14% growth week over week. This week? 19%.
This week has seen a 19% increase over last in the number of projects that have adopted GPL v3. As of 1pm PDT, August 24th, our research indicates that 450 projects have officially adopted GPL v3, as compared to 378 on August 17, 2007. An additional 6 projects have adopted LGPL v3, bringing the total of LGPL v3 projects to 27.
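For the skeptics, the 19% figure falls straight out of Palamida’s two counts. A quick back-of-the-envelope check in Python:

```python
# Week-over-week GPLv3 adoption growth from Palamida's project counts.
prev_week = 378   # projects as of August 17, 2007
this_week = 450   # projects as of August 24, 2007

growth_pct = (this_week - prev_week) / prev_week * 100
print(f"{growth_pct:.0f}%")  # → 19%
```

Seventy-two new projects on a base of 378 is just over 19% in one week, which matches Palamida’s claim.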
Palamida has also been pretty good about informing the masses about which projects are jumping on board with GPLv3. New project conversions this week include:
GnuPG: GnuPG is the GNU project’s complete and free implementation of the OpenPGP standard as defined by RFC 2440. GnuPG allows you to encrypt and sign your data and communication, and features a versatile key management system as well as access modules for all kinds of public key directories.
GNU CPIO: This project is part of the GNU Project. GNU cpio copies files into or out of a cpio or tar archive. The archive can be another file on disk, a magnetic tape, a pipe, etc.
SIWT: Sudo inventory web-tool (SIWT) is a web interface to view and administer information related to /etc/sudoers files on multiple servers. The database contains data on servers, users, aliases, dates, etc. This tool is helpful for internal audits.
August is hot for GPLv3 news, but will it last?
This gem didn’t have a home on any of our sites, but I didn’t want the reporting to go to waste. So it’s going underground on the Enterprise Linux Log. On that note, enjoy some session coverage from LinuxWorld 2007!
SAN FRANCISCO – Everyone in IT uses storage in their data center, therefore everyone will one day have to deal with that storage failing. It could happen at any time, even in the moments before your LinuxWorld presentation on demystifying data recovery.
That’s what happened to Chris Bross anyway, roughly five minutes before attendees started filing into his session on “Demystifying Data Recovery” here at the LinuxWorld Conference and Expo.
Bross is an enterprise recovery engineer with Novato, Calif.-based DriveSavers Data Recovery Inc., and the good news for his presentation was that he had brought along a backup USB thumb drive with a copy of his slides. All too often, however, Bross said, IT managers and decision makers are not taking the steps necessary to secure and recover their data.
All storage fails eventually
“All storage is going to fail eventually. All hardware breaks. Are you prepared for the inevitable?” Bross said before conducting an informal poll about who had ever lost data.
A smattering of attendees, Bross included, raised their hands. (In addition to losing a USB thumb drive, Bross would later admit that one of his two Ubuntu laptops failed during a shipping snafu.)
But the informal poll belied a much bigger problem in data backup and recovery in today’s enterprise, one that Bross set out to diagnose and recover much as he and his staff have done hundreds of times back in Novato with hard disks damaged by fire, water and mechanical defects (the latter being demonstrated with a variety of drive-head-on-platter audio clips from real-life recovery efforts at the DriveSavers clean room).
Disaster recovery: the numbers
“For all of the effort [systems administrators] put into assigning employees backup tasks, 60% of all corporate data today resides on unprotected PC desktops and laptops,” Bross said, citing industry research from Rochester, N.Y.-based Harris Research.
And when natural disasters strike – and they will, despite the disagreement over disaster recovery planning between business executives and IT staffs – the track record of today’s data centers is poor.
According to a University of Texas study of U.S. small and medium-sized businesses, when a company loses data in a natural disaster, 50% never reopen and 90% are gone within two years. Bross said the hourly cost to “recreate” these battered data centers can run anywhere from $50,000 per hour to $2 million per job at large eCommerce sites.
The reality of reliability
Bross said common knowledge in data centers is that the mean time before failure (MTBF) – or “mean time to failure” – for a typical hard drive is between 500,000 and 1.5 million hours. In an ideal environment, the annual rate of failure of any given drive is 0.88%.
But two studies from Google Inc. and Carnegie Mellon disagree. In both studies, real-world testing of drive reliability found the actual annual replacement rate was 3% to 8%. On top of that revelation was word that failure rates double after the first year of service. For drives older than one year, Bross gave simple advice: “If you experience a drive error of any kind, pull the drive. It’s better to be safe than sorry,” he said.
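That 0.88% figure is just the quoted MTBF annualized. A quick sketch of the arithmetic, assuming a constant failure rate (which, as the Google and Carnegie Mellon studies show, real drives don't actually have):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_failure_rate_pct(mtbf_hours):
    """Idealized annual failure rate implied by a quoted MTBF.

    Assumes a constant failure rate over the year; real-world
    replacement rates run several times higher.
    """
    return HOURS_PER_YEAR / mtbf_hours * 100

# A 1,000,000-hour MTBF implies roughly the 0.88% annual rate quoted above:
print(f"{annual_failure_rate_pct(1_000_000):.2f}%")  # → 0.88%
```

Run the same math on the observed 3% to 8% replacement rates and the effective MTBF drops to roughly 110,000 to 290,000 hours, a far cry from the datasheet numbers.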
And those were just the mechanical failures; there are also natural disasters and virus corruption to contend with. Truth be told, studies have shown that user error is by far the biggest contributor to data loss. Fully 60% of all hardware failures are the result of the user, Bross said, which includes malicious or accidental deletion of code, incorrect RAID configuration, accidental reformatting and bad maintenance.
Data protection and the inevitable
Bross concluded his session with a number of tips and best practices for systems administrators to use in specific situations.
Hurricanes and floods – “Remember, if you want to preserve data, you’ll want to make sure that the drive is kept wet,” Bross said. “Storage needs to remain wet. If it dries out, there’s lots of calcification and mineral deposits that can form and cause havoc.” Bross instructs all of his customers to keep wet drives submerged and cool.
Data has been damaged, now what? – Rule number one: Don’t panic. Evaluate the failure and check the status of your backup, Bross said. “Don’t run repair utilities on it. Don’t reformat the volume. Don’t restore backup to the drive in question. Don’t remove drives from a RAID system or rebuild it,” he said. Instead, cool heads will prevail: evaluate and check the backup drive first.
RAID is not equal to backup! — RAID, by its definition, is a redundant array of disks. “The reality is that a RAID device is only part of the backup solution,” Bross said. “RAID is good for one thing, and it’s not as a primary backup application – it’s fault tolerance.”
DIY data recovery – Doing things on your own is fine for deletion or logical corruption of volumes. However, Bross warned that this approach is bad for hardware damage or complex configurations.
Local service providers – This is a good option for transfers, but not for data recovery (this category also encompasses a local expert).
Professional data recovery services — For mission-critical data where risk is not an option. These employ clean room facilities as required by drive manufacturers. Bross said that even in these pristine professional conditions, “not every patient makes it through this ‘ER for hard drives.’”
Remote access services — Cannot be used with physical device failures, and there is the potential risk that hardware or logical volumes could degrade during diagnosis. Potential benefits are a quick resolution and recovery of data. And there’s no need to ship hardware to a lab, either.
Bross said users can avoid needing these data recovery strategies in the first place by making complete backups regularly. “Have a process and a schedule. Assign responsibility and create a chain of command,” he said. “All storage eventually fails. Run home and backup your data now. Be happy you did and sleep well tonight.”
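In that spirit, here is a minimal, hypothetical Python sketch of the “process and schedule” idea: a dated backup archive with a trivial verification step, suitable for running from cron. The paths, naming scheme and helper name are my own assumptions for illustration, not anything DriveSavers recommends.

```python
import tarfile
from datetime import date
from pathlib import Path

def make_backup(source_dir, dest_dir):
    """Write source_dir into a dated .tar.gz under dest_dir and verify it opens.

    Returns the archive path and the number of members it contains.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"backup-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    # Cheap verification step: the archive must at least list cleanly.
    # (A real process would also test-restore to separate media.)
    with tarfile.open(archive, "r:gz") as tar:
        members = tar.getnames()
    return archive, len(members)
```

The verification pass matters: an unreadable backup discovered during a restore is exactly the scenario Bross spends his days cleaning up after.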
Are people still terrified of SELinux? Of its complicated policy module creation and its rule-with-an-iron-fist mentality over Linux systems? Oh right, they are. That’s why over the past year every conference I’ve attended has had a session about SELinux and how much easier it is to use than it was last year.
Red Hat Magazine editor and SELinux guru Dan Walsh:
“Who’s afraid of SELinux? Well, if you are, you shouldn’t be! Thanks to the introduction of new GUI tools, customizing your system’s protection by creating new policy modules is easier than ever. In this article, Dan Walsh gently walks you through the policy module creation process.
A lot of people think that building a new SELinux policy is magic, but magic tricks never seem quite as difficult once you know how they’re done. This article explains how I build a policy module and gives you the step-by-step process for using the tools to build your own.”
Hmm, magic. Good one. I think when SELinux does work as advertised you’d be hard pressed to find a Linux administrator who doesn’t attribute some of that success to the Black Arts.
Does SELinux work? Is it really powerful? You bet it is, but maybe *too* powerful, since users are routinely switching it off when it doesn’t allow them to do anything with their own systems.
Luckily for you RHEL users out there, Walsh goes beyond magic tricks and lays out a step-by-step explainer for SELinux policy module creation in his latest article at Red Hat Magazine. He advises users to start small, use new tools like polgengui, and then he just goes crazy with the steps (complete with screen grabs for the visual learners, like myself).
It’s a good read, and if my experience with Walsh is any indication (I’ve seen his presentation at the Red Hat Summit), there will be more to follow.
In a post that is sure to generate its fair share of headlines and “constructive criticism,” Ross Brunson over at Novell’s East Region Linux blog generated a list of the top five reasons to migrate from Red Hat Enterprise Linux to SUSE Linux Enterprise.
Yeah, I know. I could envision the flame war comments and trackback posts before I even finished reading the headline too.
But more seriously, Brunson dishes out the points using categories that I think any IT manager out there today could understand: cost, management, interoperability, customer satisfaction and deployment.
Here’s a sample, using an example that is core to the entire “Microsoft-Novell partnership is good for you, really!” argument:
Interoperability - Novell started life in the pre-Open Source days; it’s got a huge patent portfolio, years of closed-source product development and many customers who use those products. Red Hat began as and remains aggressively Open Source; even when it doesn’t make sense, they have to adhere to that ideal. Novell enters into and works hard on agreements that increase its interoperability with other environments and make it easy to just get things working. Novell’s agreement with Microsoft is a good example of two organizations that aggressively compete also setting aside differences to make the customer’s life easier.
There are four more where that came from, so get yourself over to Ross’s post for more information on those other topics, including YaST and virtualization.
Do you agree? Disagree? Have you experienced firsthand the benefits of Novell’s “deal with the devil?” Even one year later, this deal still invokes a religious fervor amongst the Linux community (although it’s noticeably less fervor-ish than it was in November 2006).
Apparently Linus Torvalds, father of Linux, did not read my blog post about the GPLv3 the other day. In that post, I went over an email I had received from Palamida, a West Coast IP vetting vendor, which disclosed that the GPLv3 was enjoying 14% project adoption growth week-over-week.
In an interview with the EFYTimes, Torvalds continues to pooh-pooh the GPLv3, much as he has done for the better part of the past year. “In the absence of the GPLv2, I could see myself using the GPLv3,” he said. “But since I have a better choice, why should I?”
Some big proponents of the GPLv3 have been our friends over at Boycott Novell, a blog that came to be as a result of the Microsoft-Novell partnership formed last November. For the positives of the GPLv3 I encourage you all to visit that site on occasion to get an impassioned take on why the world needs version 3. For a negative take, well, I’m sure you’ve heard of Google and Linus Torvalds. Combine the two to find a trove of information from the detractors. Is that fair? Maybe not, but the GPLv3, for all its success thus far, still has an uphill climb ahead of it.
The webcomic xkcd takes on Black Hat Linux support in its latest comic (“Rule 34”).
You can check out the full comic in the link above, and more xkcd comics here.
xkcd’s other Linux-related fare is pretty good too.
[Hat tip Alex for the link!]
California-based IP gurus Palamida emailed me this week with some intriguing GPLv3 information that I thought I’d share with everyone this morning. Apparently all the GPLv3 haters can go to lunch, because the little license that could is seeing adoption rates of approximately 14% week-over-week.
In the mailing, entitled “Get ready — it’s ramping up,” Palamida’s Melisa LaBancz-Bleasdale details the growth of the GPL’s third incarnation, which went live last month.
“This week has seen a 14% increase over last in the number of projects that have adopted the GPLv3. As of 3pm Pacific time, August 17, our research indicates that 378 projects have officially adopted GPLv3, compared to 332 projects on August 10, 2007. An additional 8 projects have adopted the LGPLv3, bringing the total LGPLv3 projects to 21.”
Then Palamida goes and makes my day with a handy chart. Charts, as anyone who reads Digg.com can tell you, make things easier to understand, digest, and therefore flame (if you don’t agree with them, usually by means of another chart that shows the exact opposite of what the first says). See also: Pictures.
Palamida also takes the time to mention some of the latest GPLv3 conversions. They include:
libchart: an easy-to-use chart creation PHP library. It can generate bar diagrams or pie charts. It is compatible with PHP4/5 (compiled with GD and FreeType) and has no other dependencies.
itools: a collection of Python libraries which provides a wide range of capabilities, including an abstraction over directory and file resources, a search engine, type marshallers, datatype schemas, i18n support, URI handlers, a Web programming interface, a workflow interface, and support for data formats such as (X)HTML, XML, iCalendar, RSS 2.0, and XLIFF
librapiddev: a collection of helper classes and tools to speed up your development of SDL/OpenGL projects
Additionally, Palamida notes that there are currently 4,748 projects with licenses that now read “GPL v2 or LGPL v2.1 or later.” For a complete list of projects that have adopted GPLv3 and LGPLv3, see http://gpl3.palamida.com.
Honorable mention: There are now over 5,000 GPLv3 (and related) projects (as a combined number from: GPLv3, GPL v2 or later, LGPL v3, and LGPL v2.1 or later). Not too shabby.
It’s always a fun morning when the Samba team fires off another stable production release of their namesake open source project. Today, Samba’s Jerry Carter mailed the Samba mailing list with an update on 3.0.25c — it’s available!
This is the latest production release of the Samba 3.0.25 code base and is the version that production servers should be running for all current bug fixes.
Major bug fixes included in Samba 3.0.25c are:
- File sharing with Windows 9x clients.
- Winbind running out of file descriptors due to stalled child processes.
- MS-DFS interoperability issues.
The source code can be downloaded from: http://download.samba.org/samba/ftp/
The release notes are available online at: http://www.samba.org/samba/history/samba-3.0.25c.html
Binary packages are available at http://download.samba.org/samba/ftp/Binary_Packages/