Mike Cohn, one of the leading authorities on agile methodologies, happens to live in Lafayette, CO, a Denver suburb. This is lucky for those of us who live in the Denver area because it means we occasionally get to see him speak at the Denver Agile User Group meetings. Such was the case last week, when Cohn gave us a preview of the keynote he will be giving at the Agile 2010 Conference, being held August 9-13 in Orlando, Fla.
In his presentation, Cohn reminded us of the progress that’s been made in agile. Though it’s not a silver bullet, organizations that are using agile have reported productivity improvements. “Agile is not something you become; it’s something you become more of,” Cohn stressed. He then added, “For most of us, it’s about becoming more than we were.” Cohn challenged us to “raise the bar on each other,” finding ways to incrementally improve. “It’s still about continuous improvement,” he said. Cohn then talked about how we should be ADAPTing to agile with:
- Awareness that the current way of working isn’t producing the desired results
- Desire to change
- Ability to work in an agile manner
- Promote early successes to build momentum and get others to follow
- Transfer the impact of agile through the organization so it sticks
Check out this video where Cohn tells us about ADAPTing to agile.
Over the holiday weekend, YouTube users fell victim to a series of coordinated attacks by a team of black hat users. Their antics ranged from redirecting unsuspecting YouTubers to malware sites to plaguing them with pop-up ads for adult websites. The group exploited a little-known weakness in YouTube’s comment field to place poisoned HTML tags that caused the pop-ups to appear uncontrollably, eventually forcing Google (YouTube’s owner) to step in and shut down the comment features and other aspects of the site.
The attack was allegedly coordinated by an online group of pranksters known as 4chan. The group exploited a flaw in YouTube’s comment box, which normally restricts the amount of HTML allowed, by simply adding extra script tags. YouTube usually strips script tags from comments, but 4chan members discovered that when duplicate script tags were added, only the first ones were stripped out, leaving the secondary tags intact and allowing the hackers to do their worst to viewers of YouTube’s most popular videos.
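The exact filter logic was never made public, but the reported behavior — duplicate tags surviving while the first ones were stripped — suggests a sanitizer that doesn’t re-scan its own output. A minimal Python sketch of that class of bug (purely hypothetical code, not YouTube’s actual filter):

```python
import re

# Matches a complete <script>...</script> block (case-insensitive).
SCRIPT_TAG = re.compile(r"(?is)<script.*?>.*?</script>")

def naive_sanitize(comment: str) -> str:
    # Hypothetical single-pass filter: removes only the first script
    # block it finds, leaving any duplicates behind.
    return SCRIPT_TAG.sub("", comment, count=1)

def robust_sanitize(comment: str) -> str:
    # Keep stripping until no script blocks remain at all.
    while SCRIPT_TAG.search(comment):
        comment = SCRIPT_TAG.sub("", comment)
    return comment

payload = "<script>a()</script><script>alert('pwned')</script>"
print(naive_sanitize(payload))   # the duplicate tag survives
print(robust_sanitize(payload))  # everything is stripped
```

The general lesson holds regardless of YouTube’s real implementation: a blacklist filter that makes one pass over its input can be defeated by anyone who studies what a single pass removes.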
Over the past year, SearchSoftwareQuality has featured a number of tips designed to help enterprises test for and prevent defects related to cross-site scripting (XSS) vulnerabilities. These tips describe professional ways to find and secure potential XSS problems before attackers do. Using them will help protect your enterprise applications from similar attacks and prevent online mischief, as XSS vulnerabilities are very real problems.
XSS testing and prevention tips
Cross-site scripting (XSS) explanation
Cross-site scripting issues are a type of input-validation weakness in a Web form. Though XSS issues can be fairly easy to fix, avoiding them altogether is key.
Beating software’s cross-site scripting, authentication problems
A Web security expert explains where security efforts are best placed. By checking for cross-site scripting and authentication mechanism weaknesses, you can eliminate problems in your application.
Finding cross-site scripting (XSS) application flaws checklist
Cross-site scripting (XSS) is a major concern; it can be unpredictable and requires multiple tools to test for. An expert sheds light on the history of XSS issues and recommends tools to prevent XSS application flaws.
When it comes to lab management, administrators and infrastructure managers frequently think of workload provisioning tools such as VMware’s Lab Manager, VMLogix LabManager or Windows Deployment Services to provision servers. When it comes to the hardware side of lab management, we frequently think of advanced management offerings in blade servers or integrated management processors like the Dell DRAC or HP iLO.
A well-rounded lab will also manage the connectivity between servers and storage. This is what I observed recently while touring the hardware integration lab at Hitachi Data Systems (HDS) in San Jose, CA during HDS Geek Day 0.9. There, a wide array of HDS and third-party products are in use for a number of test functions. Part of the HDS offering includes software to help manage over 30 different storage-centric offerings for features such as replication, tiering and protection. The lab is over 36,000 square feet, and if you are primarily testing storage, you will quickly see that a large amount of storage I/O is being managed there.
HDS’s approach is to use the Gale Technologies Lab Manager product for managing storage connectivity. We focused on Ethernet and fibre channel interface management during our tour of the HDS lab, but RF/coax, POTS/analog telecom, T1 and DSL connections are also available for connectivity provisioning. This lab management product, coupled with a collection of physical layer switching, can dynamically assign different servers and storage resources to each other via fibre channel. This means that lab operators can quickly disconnect a server’s fibre channel host bus adapter (HBA) from a switch port and its associated storage in favor of a new assignment without visiting the server. A robust switching environment can move a port between switch zones with a soft connection, but each interface would still require a switch port.
During the lab tour, storage expert Chris Evans and I were impressed with how rapidly a physical path could be assigned to the servers and storage in question. This can be used not only to add connectivity from a server to an associated storage device, but also to simulate dropped fibre channel links within a pool of links, which can exercise various multipathing functions to ensure there is no data loss.
Do you have a need for physical layer connectivity management in a lab capacity? Share your comments below.
Earlier this year, Coverity announced the fifth release of its defect investigation software, now called Coverity 5. Coverity 5 is essentially a rehashing of the existing code analysis engine, process tools and debuggers, but with added features and improved interfaces.
Recently, I spoke with Behrooz Zahiri, Coverity’s self-proclaimed “perfect storm” of code engineer and market analyst. Zahiri summarized where Coverity 5 came from in three words: “our market research.” Coverity has been auditing software development organizations to find out where common defects occur, which tools are used to discover them and how they are resolved. From this research, Coverity built a database of common defect areas and solutions, which it integrated into its latest software.
This new research-derived database serves two purposes: one, to work as a resource for developers so they can write code with fewer errors; and two, once Coverity’s analysis engine is engaged, to use the information in the database to find issues more quickly and recommend common solutions to the problems it finds.
“When we decided to revamp Coverity 4, we had two goals that we wanted to reach. One was to make certain that our product was scalable and could deliver quick results for teams regardless of the team’s size. But we also wanted to elevate the adoption of our tool by increasing the effectiveness of defect repair,” said Zahiri.
Coverity’s new defect manager is built on a Java platform that has direct access to popular open source tools that have been approved and entered into its database, which also allows for plug-and-play of these pre-approved tools. The interfaces have also been updated and Coverity believes them to be more business-focused than prior versions.
Also new is a feature Coverity calls Defect Impact. This feature evaluates Coverity’s analysis of source code for defects. If defects are found, it prioritizes how they should be resolved by grouping them under three levels of severity (high, medium and low).
“High-risk defects negatively affect multiple parts of an application,” said Zahiri. “Many of these cause unexpected behavior throughout the application and alter the application’s memory management. Medium-level concerns are often performance dampeners. These defects allow the application to run, but at less-than-optimal speeds. And lastly, there are the low priorities — these are usually warning tags or artifacts in the code. Sometimes low-priority defects can be left alone, as the cost to resolve them doesn’t reflect a true positive return on the investment and they normally don’t hurt the application enough to matter.”
Defect Impact is broken into two features: static analysis, which supports applications built with C++, C# and Java, and dynamic analysis, which currently supports only Java.
Static analysis uses checkers that map common areas where defects occur and search them for problems; this is the primary way risks are assessed in the final report the tool generates. Dynamic analysis, which, again, supports only Java at press time, looks for concurrency issues and lockups in the user interface.
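Coverity’s checkers are proprietary, but the general shape of a pattern-based static checker — parse the code, walk the tree, match a known defect pattern, report the location — can be sketched. This toy Python example (purely illustrative, not Coverity’s implementation) flags file handles opened outside a `with` block, a simple resource-leak pattern:

```python
import ast

def find_unclosed_opens(source: str):
    """Toy static checker: report line numbers where open() is
    called outside a `with` block (a potential resource leak)."""
    tree = ast.parse(source)

    # First pass: record the lines of open() calls managed by `with`.
    with_lines = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.With):
            for item in node.items:
                with_lines.add(item.context_expr.lineno)

    # Second pass: flag every other open() call.
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "open"
                and node.lineno not in with_lines):
            findings.append(node.lineno)
    return findings

sample = "f = open('a.txt')\nwith open('b.txt') as g:\n    pass\n"
print(find_unclosed_opens(sample))  # -> [1]
```

A real engine like Coverity’s runs hundreds of far more sophisticated checkers over whole builds, with interprocedural and data-flow analysis behind them, but the pattern-matching core is the same idea.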
According to Zahiri, Coverity’s closest competitor is Klocwork, which he says is a similar offering that is priced and marketed very differently. Zahiri also admits Coverity has crossed paths with Polyspace and Parasoft from time to time.
For more information on Coverity check out these articles:
Coverity introduces build analysis tool, new Integrity Center (Apr. 14, 2009)
Coverity this week announced a new software build analysis product along with the bundling of all its software analysis products into one offering called the Coverity Integrity Center.
Coverity releases open source application architecture diagrams (Feb. 17, 2009)
Coverity’s new Scan library of open source software project “blueprints” can help software pros shave time off development and testing.
Coverity creates program to enforce code adherence (Nov. 25, 2008)
Coverity introduced Coverity Architecture Analyzer, which validates software architecture and detects potential security vulnerabilities.
Inside information from Coverity points to a large-scale announcement in early July, so stay tuned.
Are you having trouble implementing ITIL in your organization? Or perhaps you’re new to ITIL and want to learn more? Then you will benefit from two new books: Implementing ITIL Configuration Management and Implementing ITIL Change and Release Management.
I was able to talk to author Larry Klosterboer at the IBM Innovate 2010 conference last week to find out more about his books. In this video you’ll learn Klosterboer’s number one piece of advice for those organizations implementing ITIL Change Management.
There is growing interest in agile, but beyond a short certification class, it can be hard for people to gain the skills and experience needed to succeed, particularly if their work group is not practicing agile. The Agile Skills Project is a group formed with the idea of creating an inventory of agile skills and building a community of those interested in both practicing and enhancing those skills. I created a similar group, Beyond Certification, last December.
Recently, there has been some discussion about the future of the Agile Skills Project, so Cory Foy suggested an agile innovation game called Prune the Product Tree to explore ideas for the group. Foy explained his goals:
“Basically I want to get two-to-three groups of five-to-eight people each to play the online game. It takes about 30 to 45 minutes, depending on the conversations we have; no special software or cost. I’m a facilitator for the games, so I’ll host and run the game; just need the interest and time.”
Foy has scheduled three sessions:
- Wednesday, June 16th, 2010 at 6 a.m. EST / 11 a.m. GMT
- Wednesday, June 16th, 2010 at 10 p.m. EST / 3 a.m. GMT
- Thursday, June 17th, 2010 at 3 p.m. EST / 8 p.m. GMT
Over the past week, I’ve been blogging about some of the highlights I’ve experienced while attending the IBM Innovate conference. The conference presented opportunities to both learn and play and included IBM announcements, keynotes from high-profile speakers, over 350 technical sessions, excursions to Epcot and Disney’s Hollywood Studios, and last — but not least — the opportunity to meet and talk with people who are energized by innovation and the role that software will play in shaping our future.
I caught up with several IBMers after the conference to ask about their impressions. Listen in to what they have to say:
Innovate 2010 wrapped up today with a keynote from Walker Royce focusing on “econometrics that are core to continuous improvement.”
Royce’s message was clearly one of “breakthrough agility.” He spoke out against the waterfall approach, which he labeled “geriatric,” and promoted integration testing before unit testing, claiming this gives teams the ability to catch “malignant errors early in the cycle.” He said key metrics such as defects, scrap and rework improved when teams used agile methods rather than conventional approaches.
To be honest, I was surprised that IBM took such a strong and public stance against the traditional waterfall methodology. I attended a Tuesday breakout session, “Quality in the Trenches Panel: Traditional? Agile? Something Else?” with Terry Quatrani and Scott Ambler, and in that session, too, the underlying message clearly screamed “agile is the better approach.” The debate was purely tongue-in-cheek, with Ambler in suit and tie arguing for waterfall and Quatrani, in Mickey Mouse ears and casual dress, arguing for agile, much in the style of the Mac vs. PC commercials. The humor, however, clearly mocked waterfall as an approach riddled with inefficient processes, an over-emphasis on documentation and a lack of collaboration.
Though I knew a lot of agile was going on within IBM, with Big Blue having more of a suit-and-tie reputation, it surprised me that their message was not just pro-agile, but anti-waterfall.
At the keynote, Tim Lyons of Nationwide spoke about his implementation of an agile methodology.
“We thought we had standardized practices, but when we applied metrics they weren’t as standardized as we thought. Metrics provided the insights to dive down into multiple levels of standardizations and truly get into best practices,” said Lyons.
Nationwide adopted an onshore agile, lean model, operating with a CMMI Level 3 team, and found it more cost-effective than an offshoring model.
Agile and CMMI combined? That was something I hadn’t expected to hear. Royce confirmed I was not alone in this viewpoint when he got back up on stage and said, “Many people think they’re opposing, but they can be used together.”
CMMI (Capability Maturity Model Integration) is a model that helps organizations attain continuous improvement of their software development processes. My experience with CMMI is that it’s quite rigid, requiring thorough documentation of standardized, repeatable processes. Agile is adaptable. The Agile Manifesto promotes “Individuals and interactions over processes and tools” and “Working software over comprehensive documentation.” With CMMI being documentation-heavy with strict adherence to process, I have a hard time imagining this being used in a purely agile environment which promotes adaptability and change. Nevertheless, Nationwide is using both and seeing positive results.
What do you think? Are agile and CMMI a good mix or are they opposing in nature? And what about waterfall? Is the methodology dying?
Though there have been many great speakers at the IBM Innovate conference this week, I’d give Dean Kamen, who spoke at this morning’s keynote, the prize for most inspirational. Kamen is the owner of a company called DEKA, credited with a number of life-changing inventions, many in the health-care field, including an insulin pump, a mobile dialysis machine, and the iBOT, an all-terrain wheelchair that allows users to go up and down stairs and rise up to eye level with those who are standing. (It was technology from the iBOT, by the way, that was used in the invention Kamen is probably best known for: the Segway.) Kamen also spoke of the advanced prosthetic arm that DEKA developed, with funding from the Defense Advanced Research Projects Agency (DARPA), for veterans, some of whom have lost both arms.
Kamen stressed the amount of software included in these devices and joked that when dealing with medical products, you can’t afford to get the Blue Screen of Death. The importance of software quality is underscored in medical devices, and the quality of embedded software and “systems of systems” has been a common theme throughout the conference this week.
Kamen had a very humble demeanor as he described his hopes for a better world. He gets a laugh from the crowd in this video clip as he tells us “In my company, if I want people to listen to me speak, I have to pay them.”
Kamen described projects to purify water and provide power to under-developed communities. What he seemed most passionate about was his commitment to FIRST (For Inspiration and Recognition of Science and Technology), an organization that encourages the world’s youth, particularly girls and minorities, to pursue education in science and technology. Kamen described the history of FIRST, telling us it started with the need for a “Shaquille O’Neal of science and technology” rather than having kids think of those who like science as crazy, frizzy-haired nerds. With support from the White House, the program started in 1992 in a New Hampshire gym. When it got too big for the gym, the venue moved to Epcot, from there to the Houston Astrodome, and it is now held in the Georgia Dome with participation from over 150,000 kids from around the world. Kamen relayed the story of George W. Bush proclaiming, much to Kamen’s embarrassment, “This is just like WWF, but for smart people.” The saying caught on and turned out to be a fantastic marketing slogan for the organization.
Kamen reminded us of a quote by William Butler Yeats: “Education is not filling a pail… it is lighting a fire.”
His talk ended with a plea to the group: “The technical community has to have a voice in the hearts and minds of kids.”
Based on the applause and standing ovation, I’d say he touched the hearts and minds of the audience and inspired us to find ways that we can help our kids develop the skills to better the world.
Grady Booch’s title at IBM is Chief Scientist of Software Engineering, which, he says, “basically means I get to do what I feel like doing with software engineering.” IBM acquired Rational Software, where Booch was working as Chief Scientist, in 2003. Whatever his title, he’s considered something of a “rock star” among software engineers. I was tempted to ask for his autograph when I had the opportunity to speak with him at the IBM Innovate conference this week, but resisted “groupie behavior.” Booch is best known for his work, along with Ivar Jacobson and James Rumbaugh, on developing the Unified Modeling Language (UML). He’s also well known for his work on design patterns and has authored several books on UML and object-oriented analysis and design.
Though he’s an icon in the world of software development, he chats easily with the crowd at the Innovate conference, with a laid-back smile that sets people at ease.
In this video clip, Booch gives his thoughts about cloud computing and what he sees as future trends in this space.
Booch was one of the speakers at this morning’s keynote at the conference. The title of the presentation was “Imagine!” and included the many exciting opportunities for innovation that are being explored at IBM. More predictive weather forecasting, improvements in DNA sequencing, and the use of Zinsight were just a few of the projects Booch mentioned. Booch described the use of Second Life to allow virtual teams to collaborate and demonstrated with a short video of avatars working in cyberspace on an agile project.
I didn’t find the video that Booch showed at the keynote on YouTube, but I was pleased to find a series called “Rational is Agile,” including this video in which you will find the avatars of Matt Holitza, Scott Ambler and Grady Booch.