Wednesday, September 24, 2008

Automated vs. Manual QA

Bugs, bugs, bugs, every programmer's nightmare. A program may take a month to write, but several months to debug. Even if all the bugs are caught, more are bound to pop up in the most unexpected of places. Fortunately for us, just as in real life, there are tools that we can use to kill those pesky critters once and for all.

Exercise
In this exercise, we used three quality assurance tools: Checkstyle, FindBugs, and PMD. Each has a different focus, which I'll describe in more detail later in this post. Our professor provided a test project for us to try these tools on, and we used the Ant build system to run each QA tool against it. We then fixed up the code as much as possible and uploaded the final build, which I've provided here:

http://finalranma.imap.cc/stack-danielftian-6.0.924.zip

Checkstyle
Checkstyle mainly checks the formatting of the source code to ensure that it follows code formatting guidelines. Of course, the guidelines can vary from organization to organization, but the important thing is for everybody to use the same one, so that people don't have to spend hours deciphering each other's code and can spend that time fixing it instead. In the provided project, there were a few formatting errors, such as the position of the curly braces, that were easily fixed once they were pointed out. Checkstyle generates an HTML report that lists every place where a formatting error has occurred. It will even find problems with Javadoc comments, such as missing parameters and incorrect sentence formatting. However, it obviously won't catch things such as ambiguously named methods and variables, and those checks are still better left to real people.
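As an illustration, here is the kind of thing it complains about. This is a contrived before-and-after sketch checked against the default Sun coding conventions, not code from the actual test project (the push method and contents field are made up):

    // Before: Checkstyle flags the opening brace sitting on its own line,
    // the missing @param tag, and the Javadoc sentence missing its period.
    /**
     * Pushes an item onto the stack
     */
    public void push(Object item)
    {
        contents.add(item);
    }

    // After: brace moved to the end of the line, Javadoc completed.
    /**
     * Pushes an item onto the stack.
     * @param item the item to push
     */
    public void push(Object item) {
        contents.add(item);
    }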

FindBugs
FindBugs specializes in finding bugs that normally won't be flagged at compile time but might become issues at runtime. The one example that came up in the test project was that Integer one = new Integer(1); is much less efficient at runtime than Integer one = Integer.valueOf(1);. FindBugs generates an HTML report with detailed descriptions of each problem and its solution in plain English that even beginner programmers can understand. This is a great QA tool for finding problems that veteran programmers know about, but that less-experienced ones would be unaware of.
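In code, the difference looks like this (a minimal sketch of the same fix):

    // Flagged by FindBugs: the constructor always allocates a brand-new
    // object on the heap.
    Integer slow = new Integer(1);

    // The suggested fix: valueOf() returns a cached Integer for small
    // values, so no new allocation is needed.
    Integer fast = Integer.valueOf(1);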

PMD
PMD, like FindBugs, focuses on finding bugs that show up at runtime, but I find that it uses a more robust, and stricter, ruleset. This can be both a good and a bad thing. For example, when I ran PMD on the test project, it reported an empty catch block, and it even detected a method that created a variable and returned it immediately, when it could simply have returned the value instead. It also suggested that certain variables could be made final, since they are only assigned in the declaration or constructor. However, one particular error confused me. The description said "Avoid using implementation types like 'ArrayList'; use the interface instead" for the code ArrayList list = new ArrayList();. Because of this one lingering error (I fixed all the Checkstyle and FindBugs errors), I was unable to use Ant to verify the build.

Just like FindBugs, PMD generates an HTML report with detailed descriptions of the problems it found, but unfortunately the description pages all link to PMD's website, which is a problem if the computer doesn't have internet access. Also, the description pages provide examples, but only of what the wrong code looks like; they don't show any examples of what the correct code should be, which became very troublesome in this case since I couldn't find a solution elsewhere.
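To make the report concrete, here is a contrived class (not the actual test project code) that trips the same rules, with the fixes I believe PMD wants noted in the comments:

    import java.util.ArrayList;
    import java.util.List;

    public class PmdExamples {
        // PMD suggests making this final: it is only assigned here and
        // never reassigned afterward.
        private int capacity = 10;

        public int getCapacity() {
            try {
                riskyOperation();
            } catch (Exception e) {
                // Flagged: an empty catch block silently swallows the error.
            }
            // Flagged: unnecessary local variable; PMD wants
            // "return capacity;" directly.
            int result = capacity;
            return result;
        }

        public void useList() {
            // The project's code read "ArrayList list = new ArrayList();".
            // As far as I can tell, PMD wants the variable declared against
            // the interface (List) rather than the implementation:
            List list = new ArrayList();
            list.add("example");
        }

        private void riskyOperation() throws Exception {
            // A placeholder so the try/catch above has something to call.
        }
    }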

CodeRuler and QA tools
Our professor also asked us to run the QA tools on our prior CodeRuler assignment to see how well our code held up to the conventions. Checkstyle reported a bunch of Javadoc errors, along with several lines longer than 100 characters. All of the errors Checkstyle caught had also been noted in a classmate's peer review of our code. FindBugs reported no errors, but PMD turned up several, illustrating the differences between their philosophies. PMD reported that one of our methods was empty, and it also suggested avoiding conditions like if (x != y) in order to keep the code consistent with the "if same, else different" philosophy. This problem understandably wasn't caught in the peer review, because it's more of a stylistic problem than a potential source of bugs.
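For example (the condition and method names here are my own invention, not our actual CodeRuler code), PMD prefers the second form below to the first:

    public class BranchOrder {
        void example(int x, int y) {
            // Flagged: a negated test with an else branch makes the reader
            // mentally invert the condition.
            if (x != y) {
                handleDifferent();
            } else {
                handleSame();
            }

            // Preferred: test for equality and swap the branches, so the
            // "same" case comes first.
            if (x == y) {
                handleSame();
            } else {
                handleDifferent();
            }
        }

        private void handleSame() { }
        private void handleDifferent() { }
    }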

Automated vs. Manual QA
So in the end, which one is better, automated or manual QA? I'd have to say both. Manual QA can catch things that only a human will notice, such as badly named variables and methods or incorrectly formatted comments. Automated QA, on the other hand, encodes the knowledge of veteran programmers and catches bad programming practices that would otherwise go unnoticed; but even then, a human needs to review the errors and decide whether a fix is necessary. By combining the two methods, a programmer can write code that not only works, but is also well formatted and more efficient than it would have been without any QA tools.

1 comment:

Freddy Mallet said...

Hello, if you want to go beyond the simple Checkstyle and PMD reports, you can take a look at Sonar. This tool aggregates quality information provided by well-known open source tools to monitor a project portfolio from a central point.
Regards,
Freddy