The Pain of Fixing Code

I’ve been dealing with bugs galore lately (memory leaks, overflows, and the like), and it has brought forward some discussion about the difficulty of finding developers who can recognize flawed code, let alone make the time to repair it.
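For concreteness, here is a contrived C sketch of the two bug classes I mean (the function names and sizes are invented for illustration, not taken from the code I was actually fixing); a careful reviewer should be able to spot each one in seconds:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Leak: the buffer is allocated but never freed. */
    void print_copy(const char *src)
    {
        char *buf = malloc(strlen(src) + 1);
        if (buf == NULL)
            return;
        strcpy(buf, src);
        printf("%s\n", buf);
        /* BUG: missing free(buf); every call leaks the allocation. */
    }

    /* Overflow: strcpy writes past the 8-byte buffer whenever
       src is longer than 7 characters. */
    void greet(const char *src)
    {
        char buf[8];
        strcpy(buf, src);  /* BUG: no bounds check on src. */
        printf("Hello, %s\n", buf);
    }

Bugs like these are trivial once seen, yet the cost of finding them seems to vary enormously from developer to developer. I was looking for some data to quantify the source of the problem (yes, metrics) and found this insightful article in a 1998 IEEE journal: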

In the first study on the subject, Sackman, Erikson, and Grant found differences of more than 20 to 1 in the time required by different developers to debug the same problem (“Exploratory Experimental Studies Comparing Online and Offline Programming Performance.” Communications of the ACM, January 1968). This was among a group of programmers who each had at least 7 years of professional experience.

Productivity issues are certainly a concern. Managers often time-box the release of code to the point where the beta is the production release. In fact, companies like Google make beta sound so good that you have to wonder whether anyone still cares about the concept of a “finished” product. To its credit, and from a historical perspective, IBM took a similar approach: it used to ship an engineer along with its high-end processing platforms to monitor and resolve issues on the fly (i.e. the systems were too complex for anyone to manage without pre-qualified professional help). I was always surprised by this and wondered whether anyone had investigated how to pack an engineer into the crate so he or she would just pop out and start working on the system as soon as it was plugged in. Similarly, the big V12 power plants in luxury cars were perhaps expensive not because of the quality of the build, but because the things never truly gained independence from the mechanics (go for a drive, go get a tune-up…repeat).

Tom DeMarco and Timothy Lister conducted a coding war game in which 166 programmers were tasked to complete the same assignment (“Programmer Performance and the Effects of the Workplace,” in Proceedings of the 8th International Conference on Software Engineering, August 1985). They found that the different programmers exhibited differences in productivity of about 5 to 1 on the same small project. From a problem employee point of view, the most interesting result of the study is that 13 of the 166 programmers didn’t finish the project at all—that’s almost 10 percent of the programmers in the sample.

So maybe this is a stupid question, but do people really treat dependability and repeatability as benefits worth paying for? I think the answer is that we spend when we are confident in the return, and we only look for quality when we are afraid of the unknown. Fast-food restaurants, for example, can spend on infrastructure because it is the obvious way to reduce the per-meal cost once the volume of meals served covers the investment. Ford thought this way, as did Edison. People look for the symbols of the industrialized product not for dependability or quality in any absolute sense, but only relative to the other options (which depend on their point of reference). I could continue down this line of reasoning, but in a nutshell my point is this: it is reasonable to expect improvements in code quality only in development environments that understand defect tracking and resolution, just as it is reasonable to expect quality of life only under governments that understand justice and liberty.
