Enterprise IT Watch Blog

September 19, 2012  8:02 AM

Why to curate a codebase

Profile: Ben Rubenstein


Maintenance image via Shutterstock

By Steve Poling (@stevepoling)

“The code is a steaming pile and we’ve got to rewrite it.”

Perhaps you’ve heard this and felt inclined to agree. But before you roll up your sleeves to do a comprehensive rewrite, let’s pause for a moment to ask a question: “Why is the code in this suboptimal state?”

The reasons are many. The original implementers may have been incompetent. Or their bosses may not have invested the time to do it right the first time. It is tempting to identify one actor in the history of a software project and make him the scapegoat. But scapegoating can obscure your vision of where the true problem lies, because there may be no scapegoat.

Maybe the badness of the code is the natural result of fixing bugs in the original implementation. People of good faith and adequate skill can approach a codebase with a bug report in hand, make the code work, and still leave the codebase just a little bit worse.

Most of the time, the original design won’t have anticipated the vagaries of reality that come along later as bug reports. The fix often does not fit into the original scheme of the codebase, so it gets bolted on like some aftermarket car part you’d find in the J. C. Whitney catalog. And because it didn’t fit the original scheme, it may introduce additional bugs, causing additional fixes to get bolted onto the earlier ones. The end result can be a codebase that seems like something from a Mad Max movie.

Don’t make a scapegoat of the guy doing the maintenance. He’s just doing a bug fix with no desire to refactor the existing design. If he refactors the existing design, he risks breaking something that’s working. Just get in, fix the bug, get out. If he’s smart, he writes a unit test that recreates the conditions of the bug report and verifies that the fix holds.
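The post doesn’t include code, but that get-in-fix-get-out workflow can be sketched in a few lines of Python. Everything here is illustrative: `parse_price` and the bug-report number are hypothetical, not from the post.

```python
# Hypothetical bug report: parse_price("1,299.00") raised ValueError
# instead of returning 1299.0.

def parse_price(text):
    # The fix: strip thousands separators before converting.
    return float(text.replace(",", ""))

def test_bug_report_1042_comma_separated_price():
    # Recreate the exact conditions of the bug report...
    assert parse_price("1,299.00") == 1299.0
    # ...and confirm the previously working case still works.
    assert parse_price("42.50") == 42.5
```

The test pins the bug down so that if a later “bolted-on” change regresses it, the failure points straight back to the original report.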

This accumulation of cruft in a codebase exhibits debt-like behavior. Every bug fix takes longer because the programmer has to sort through extraneous code to get to the problem. When this overhead gets too large, it’s easy to throw up one’s hands and demand a rewrite. I think this is wrong.

Why not continuously refactor the codebase? Probably because the codebase doesn’t have adequate unit test coverage. Or it has stupid codebase coverage. Stupid codebase coverage means writing unit tests that merely exercise code without any real insight into the intent of the code being tested. This usually happens after the code has been written and someone demands 100% unit test coverage. It’s better than nothing, I suppose.
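The difference between stupid coverage and insightful coverage is easy to show with a toy example (a hypothetical `apply_discount` function; none of this is from the post):

```python
def apply_discount(price, rate):
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# "Stupid" coverage: the line executes, so the coverage tool is happy,
# but the test passes even if the formula is wrong.
def test_apply_discount_runs():
    apply_discount(100, 0.2)

# Insightful coverage: the tests state what the code is supposed to mean.
def test_apply_discount_reduces_price():
    assert apply_discount(100, 0.2) == 80.0

def test_apply_discount_rejects_bad_rate():
    try:
        apply_discount(100, 1.5)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Both styles produce 100% line coverage; only the second would catch a refactoring that silently changed the discount math.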

Let’s suppose for sake of argument that a codebase has adequate unit test coverage (here I’ll conflate unit tests with functional tests, acceptance tests, and any other tests of anything else I care about). These tests allow the programmer to refactor without fear. His fix need not be encapsulated in its own island of sanity, but can fully integrate with the rest of the codebase that he can adjust to make that integration more graceful. If he sees cruft he can cut it out. If he cuts too deep there’s a unit test that has his back. The result is less technical debt and greater stability.
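Here is a minimal sketch of what “if he cuts too deep there’s a unit test that has his back” looks like in practice. The function and its crufty predecessor are invented for illustration:

```python
# Before: cruft accumulated from successive bolted-on fixes.
def normalize_name_old(name):
    if name is None:
        return ""
    name = name.strip()
    if name == "":
        return ""
    name = name.strip()  # duplicate fix bolted on by a later bug fix
    return " ".join(name.split()).title()

# After: the same behavior, with the dead code cut out.
def normalize_name(name):
    if not name:
        return ""
    return " ".join(name.split()).title()

# The test that has the refactorer's back: old and new must agree
# on every case the bug reports ever touched.
def test_normalize_name_behavior_unchanged():
    for case in [None, "", "  ada   lovelace ", "GRACE HOPPER"]:
        assert normalize_name(case) == normalize_name_old(case)
```

If the refactoring cuts too deep, say the `None` check is dropped, the test fails immediately instead of surfacing as a new bug report.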

A codebase is a capital asset. It needs to be maintained like a building must be repainted or a vehicle’s oil changed. We often think of software maintenance as mere bug fixes and feature requests, but it extends to continuous refactoring and automated tests to instrument correctness.

Steve Poling was born, raised and lives in West Michigan with his wife and kids. He uses his training in Applied Mathematics and Computer Science as a C++/C# poet by day while writing Subversive Fiction by night. Steve has an abiding interest in philosophy and potato cannons. He writes SF, crime fiction, an occasional fractured fairy tale, and steampunk. His current writing project is a steampunk novel, Steamship to Kashmir – provided he isn’t distracted by something new & shiny.

September 18, 2012  11:09 AM

This week in tech history: Steve Jobs resigns

Profile: Michael Tidmarsh

Steve Jobs image via Shutterstock

On September 16, 1985, Steve Jobs resigned from Apple after losing control of the company five months earlier in a battle with then-CEO John Sculley. Jobs went on to found the computer company NeXT and also purchased Pixar before returning to Apple and, well, you know the rest.

Each Tuesday, the ITKE team will take you back in time, as we take a look at the events that have changed technology history. Have a tip for us? Email mtidmarsh@techtarget.com or find us on Twitter (@ITKE).

Disclaimer: All posts presented in the “This week in tech history” series are subjectively selected by ITKnowledgeExchange.com community managers and staff for entertainment purposes only. They are not sponsored or influenced by outside sources.

September 17, 2012  12:02 PM

IT Presentation of the Week: Developing SOA strategies

Profile: Michael Tidmarsh

As Salesforce.com’s Dreamforce 2012 conference opens this week, we wanted to take a look back at a presentation from Dreamforce’s past that focuses on how organizations can develop an effective SOA strategy. How has your company fared with SOA? What are you looking forward to at this week’s show?


September 14, 2012  12:44 PM

GoDaddy outages and business travel: This week in IT quotes

Profile: Michael Tidmarsh

Business travel image via Shutterstock

It seems a negative light surrounds this week’s best quotes from the IT industry. Can you put a positive spin on them?

“Throughout our history, we have provided 99.999% uptime in our DNS infrastructure. This is the level our customers expect from us and the level we expect of ourselves. We have let our customers down and we know it.”
– GoDaddy CEO Scott Wagner expressing his displeasure after 52 million websites were taken down on Monday. Confusion still remains over what caused the outage, but it has raised trust issues between users and their hosting providers.

“It’s going to radically change the way data is stored.”
– Brendan Collins, vice president of product marketing for HGST, talking about its new 3.5-inch, helium-based hard disk drive, which will be coming out in 2013.

“What we uncovered is this infringement on personal lives. That is what has underlined this dislike of business travel.”
– Tricia Heinrich, chief marketing officer of ON24, explaining why business travelers would rather stay at home than go on business trips. Over 85% of respondents felt work is a major infringement on personal time, while 91% said spending too much time away from home because of work could lead to several consequences. Do you feel work travel infringes on your personal time? Share your thoughts on the Head in the Clouds blog.

“This is yet another clear reminder that otherwise smart people continue to create electronic documents that are both dangerous and discoverable.”
– Jeffrey Hartman of EDiscovery Labs explaining why electronic documents played a critical factor in Apple’s patent victory over Samsung. His take: Don’t create harmful documents in the first place.

September 14, 2012  9:13 AM

Five fun facts on how IT views business intelligence

Profile: Ben Rubenstein


Angry bird image via Shutterstock

LogiXML recently conducted a survey of 757 IT professionals (672 of whom classify themselves as IT executives, managers, directors and “IT other”) spread across several industries, on how they view users of business intelligence, and the results are intriguing — and pretty entertaining, too. Among our favorites:

  • 67 percent of respondents say users make BI needs known by “loudly insisting” (247 respondents, 33%), “screaming like banshees” (52, 7%) or assuming IT had “telepathy” (203, 27%)
  • 20 percent (151 respondents) would give users direct access to data sources only “if my life depended on it”
  • 38 percent think users spend most of their time “checking Facebook comments on photos from recent Bahamas trip” (96 respondents, 13%) or “wish I knew” (193, 25%)
  • 43 percent (326) are “meh” on implementation of their BI projects
  • 5 percent (35) think mobile BI is “more popular than Angry Birds”

Check out the full survey report here. What are your thoughts on business intelligence?

September 12, 2012  11:02 AM

YouTube IT video of the week: The History of Spam

Profile: Michael Tidmarsh

With a little help from Security Corner’s Ken Harthun, this week’s IT video takes a look at the history of every computer user’s worst nightmare: Spam.

Disclaimer: All videos presented in the “YouTube IT Video of the Week” series are subjectively selected by ITKnowledgeExchange.com community managers and staff for entertainment purposes only. They are not sponsored or influenced by outside sources.

September 11, 2012  2:20 PM

This week in tech history: Stretch computer

Profile: Michael Tidmarsh

Mainframe image via Shutterstock

On September 5, 1980, the last IBM 7030 ‘Stretch’ mainframe was decommissioned at Brigham Young University. The Stretch was the first IBM computer to use transistors instead of vacuum tubes, and it was the world’s fastest computer from 1961 to 1964.

Each Tuesday, the ITKE team will take you back in time, as we take a look at the events that have changed technology history. Have a tip for us? Email mtidmarsh@techtarget.com or find us on Twitter (@ITKE).


September 11, 2012  10:22 AM

IT Presentation of the Week: NoSQL Databases

Profile: Michael Tidmarsh

Data management is a growing concern, and NoSQL is one way to get a handle on it. This presentation from Steve Francia of 10Gen (the company behind MongoDB) discusses how NoSQL can help with managing big data.

After watching the presentation, tell us about your biggest data problem in the comments and you might win a copy of the book, Making Sense of NoSQL!

September 10, 2012  11:40 AM

IT infographic: Is 2012 the year of password theft?

Profile: Michael Tidmarsh

From online banking to email accounts, passwords have become a very important part of our everyday lives. This week’s IT infographic from Space Chimp Media shows how this year, those passwords are in greater danger than ever before. What steps are you taking to protect yourself?

For more on password protection, visit the Security Corner blog.

September 10, 2012  9:53 AM

Opportunity Cost: The Real Way to Measure Cloud ROI

Profile: Ben Rubenstein


Cost-benefit image via Shutterstock

By Brian Gracely (@bgracely)

Somewhere in the last two or three years, amid various industry debates over the definition of “cloud computing,” we seem to have forgotten how to think about costs for this emerging operational model.

Initially, there were two discussions that focused on cost savings. The first centered on server virtualization and the savings from consolidating applications on under-utilized server resources. The immediate savings came from reduced spend on rackspace, power, cooling and infrastructure. This discussion tended to focus on internal data centers, or what evolved to be called ‘private cloud.’ The second looked at the on-demand costs of public clouds (e.g. Amazon AWS) and how developers didn’t have to wait for new infrastructure to be provisioned before they could create new applications. Savings for this use case came from the elimination of capital expense (CAPEX) for internal data center resources.

Then, as competition between vendors intensified, the cost discussions began to blur between CAPEX, operating expense (OPEX) and opportunity costs. Some vendors claimed that costs could be reduced with private cloud but not public cloud. Other vendors claimed the exact opposite. How could this be possible?

As experience and usage of cloud computing evolves, we’re beginning to see a much clearer cost picture emerge. Cloud costs tend to follow these guidelines:

  • It’s possible to reduce CAPEX and OPEX costs by deploying virtualized and converged technologies, along with the ability to automate the operations of those technologies.
  • Those CAPEX and OPEX savings often return to normal levels as the delivery of optimized IT services tends to create more demand for new IT services, as business users see faster response times to new requests.
  • As businesses come to expect technology to deliver greater advantages in the market and demand more IT services, IT costs are expected to rise over time, in some cases significantly. The additional spending is focused on improving the top line (revenues) of the business.
  • The cost to deploy new IT services on public cloud resources is often significantly lower (CAPEX and OPEX) over short periods of time (days or months).
  • When compared over longer timeframes (2-3 years), the costs to deploy applications on private cloud (internal) vs. public cloud (external) are often fairly equal.
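The last two guidelines amount to a break-even calculation. A rough sketch with entirely hypothetical figures (the post cites no pricing) shows how public cloud can win over days or months while the totals converge over a multi-year horizon:

```python
# Hypothetical cost model: private cloud has large up-front CAPEX plus
# modest monthly OPEX; public cloud is a flat monthly fee. All numbers
# are invented for illustration, not drawn from the article.

def private_cloud_cost(months, capex=120_000, opex_per_month=2_000):
    return capex + opex_per_month * months

def public_cloud_cost(months, fee_per_month=6_000):
    return fee_per_month * months

# Over a short window, public cloud is far cheaper...
assert public_cloud_cost(3) < private_cloud_cost(3)

# ...but under these assumptions the curves cross at 30 months,
# after which the totals are roughly equal.
assert public_cloud_cost(30) == private_cloud_cost(30)
```

The crossover point obviously shifts with the assumed CAPEX, OPEX and fees; the point is that the comparison depends entirely on the timeframe you choose.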

When viewed holistically, cloud computing has the greatest potential to impact opportunity costs for the business, delivering increased agility when new market opportunities arise. These opportunities may be short-term or long-term, so it’s important for business leaders and IT organizations to create technology strategies that can respond to both. Failure to be prepared will negatively affect the business’s ability to compete in a given market.

While many companies are looking to deploy operating models that resemble the largest cloud computing environments to reduce costs (CAPEX or OPEX), I would suggest that the more important ROI to measure is the one based on opportunity costs from potentially missed business opportunities. Access to IT resources, whether via public cloud or private cloud, is simply too plentiful to justify missing a great business opportunity because the IT organization can’t properly manage ALL the available resources.

Brian Gracely is Director of Technical Marketing at EMC. He is a 2011/2012 VMware vExpert, holds CCIE #3077, and has an MBA from Wake Forest University. Brian’s industry viewpoints and writings can also be found on Twitter (@bgracely), his blog “Clouds of Change,” and the weekly podcast “The Cloudcast”.
