A Science of Good Software?

Please read the following article.  It's dated but still valid.  If you wish, you can stop once it finishes with accidental vs. essential complexity, but I recommend the whole thing:

http://www.cs.nott.ac.uk/~cah/G51ISS/Documents/NoSilverBullet.html

Software may not be buggier than other products.  Errors rise with complexity, so software may have a comparable defect ratio but a greater number of defects because it is more complex.  In other words, if other products were as complex as software, they might be just as defective.  If true, our problem is complexity, not software.  Specifically, the problem is the mistakes we make because of complexity.

To manage complexity, we first need to measure it.  Lines of code (LOC) seems obvious, but it has some problems.  First, what counts as a line of code?  Are brackets, directives, declarations, and so on lines?  Second, shorter code is sometimes harder to understand than longer code; check out an abbreviated-programming (code golf) contest sometime.  Fortunately, there are some alternative metrics.
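
First, though, a toy illustration of the LOC problem.  The two Python functions below (my own example, purely illustrative) compute the same result; the one-liner scores far better on LOC yet is harder to comprehend:

```python
# Dense version: one "line of code", but hard to read at a glance.
def classify(xs):
    return [("even" if x % 2 == 0 else "odd", x * x) for x in xs if x > 0]

# Expanded version: several times the LOC, yet easier to follow.
def classify_verbose(xs):
    results = []
    for x in xs:
        if x <= 0:
            continue  # skip non-positive values
        parity = "even" if x % 2 == 0 else "odd"
        results.append((parity, x * x))
    return results
```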

The first is cyclomatic complexity, which measures the number of independent decision paths through the code:

http://en.wikipedia.org/wiki/Cyclomatic_complexity
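
A minimal sketch of the idea, assuming a simplified definition (one plus the number of branch points; real tools such as radon count more constructs):

```python
import ast

# Branch points counted by this simplified version.  Counting a
# BoolOp as one branch is a simplification of the usual rule.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic(source: str) -> int:
    """One plus the number of branch points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0
"""
print(cyclomatic(code))  # 3: one straight path plus two decisions
```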

The second is the Halstead measures, which count operators and operands.  By this measure, foreach should be simpler than for, and it is!

http://en.wikipedia.org/wiki/Halstead_complexity_measures
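
To see why, compare the two loops below, with Python's direct iteration playing the role of foreach.  Counting by hand, the index-based loop has extra operators (the subscript) and operands (i, range, len), so its Halstead counts, and therefore its volume (V = N log2 n, with N total and n distinct operators and operands), come out higher for the same behavior:

```python
xs = [3, 1, 4, 1, 5]

# for-style loop: the index machinery (i, range, len, xs[i])
# adds operators and operands to the Halstead tally.
total = 0
for i in range(len(xs)):
    total += xs[i]

# foreach-style loop: same behavior, fewer operators and
# operands, hence a lower Halstead volume.
total = 0
for x in xs:
    total += x
```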

For an overview of programming complexity and measures:

http://en.wikipedia.org/wiki/Programming_complexity

Also, while not programming-related, the following shows how a complexity metric can help with quality improvement.  The key here is readability, which emphasizes that it is a person's ability to comprehend the code that matters.  In fact, I used these statistics (built into Word) to simplify this article:

http://en.wikipedia.org/wiki/Flesch-Kincaid_Readability_Test
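
For reference, here is the Flesch Reading Ease formula in code, with a naive vowel-group syllable counter (a rough heuristic; not what Word uses):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Higher scores mean easier text.
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

easy = "The cat sat. The dog ran."
hard = "Notwithstanding considerable complexity, comprehension deteriorates."
print(flesch_reading_ease(easy) > flesch_reading_ease(hard))  # True
```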

Once measured, then what?  Remember (from the article) we have two kinds of complexity: accidental and essential. The first can be reduced; the second can only be managed.

For accidental complexity, write and simplify the code according to the metrics.  This fits nicely with refactoring, which can be redefined as converting complex code into simpler code, as measured by a metric.  Imagine: a number for how good or bad the code is!  Here is an idea of what such an approach could look like; a small refactoring sketch follows the link:

http://en.wikipedia.org/wiki/Design_predicates
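
As a sketch of metric-guided refactoring (my own example, not taken from the design-predicates page): the first version below nests three decisions; the refactored version uses a guard clause and a lookup, and the simplified cyclomatic count above drops from 4 to 2 without changing behavior:

```python
# Before: nested decisions; simplified cyclomatic count of 4.
def ship_cost(weight, express, member):
    if weight > 0:
        if express:
            if member:
                return weight * 1.5
            return weight * 2.0
        return weight * 1.0
    return 0.0

# After: a guard clause plus a rate table; simplified cyclomatic
# count of 2, and each pricing rule is visible in one place.
def ship_cost_refactored(weight, express, member):
    if weight <= 0:
        return 0.0
    rate = {(True, True): 1.5, (True, False): 2.0}.get((express, member), 1.0)
    return weight * rate
```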

We handle essential complexity by dividing the system into black boxes, each of which may be split into more black boxes, and so on.  Each black box fulfills a public contract and assumes nothing about its environment.  I will call this approach BBM.  We program only to the contract, which becomes our world.  If we look at the big picture, we don't see the details, and if we look at the details, we don't see the big picture, and each view on its own is simple.  The system was only complex when we had to deal with both at once.  Incremental development, enhancement, and third-party reuse become easier.
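
A minimal sketch of BBM in Python (the names are illustrative, not a prescribed design): the public contract is an abstract interface, callers program only against it, and the implementation behind it may itself be composed of further black boxes:

```python
from abc import ABC, abstractmethod

# The public contract: to callers, this is the entire world.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

# One black box fulfilling the contract.  It could itself be built
# from further black boxes (caching, networking, ...) without
# callers ever knowing.
class MemoryStorage(Storage):
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

# Client code assumes nothing about its environment beyond the
# contract, so any conforming implementation can be swapped in.
def remember_greeting(store: Storage) -> str:
    store.save("greeting", "hello")
    return store.load("greeting")

print(remember_greeting(MemoryStorage()))  # hello
```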

BBM takes extra work, which makes the system as a whole more complex, but the result is easier to understand, so it is simpler to us.  Doesn't this contradict our measurements?  Not if we take scope into account.  If we apply the measurements within each black box, we are fine; if we want to simplify the architecture itself, we apply the measurements at a higher level.  Still, are there complexity measures that take architecture into account?
