Bad code != Bad person

I’ve written bad code in the past, and I will guarantee that I will write bad code in the future.  It is, unfortunately, inevitable.

The fact that I have written bad code and that I will continue to occasionally produce bad code does not make me a bad developer, nor a bad person.  I shouldn’t feel ashamed when I write bad code, but I should put forth the effort to improve the code when I can.

Why do we write bad code?  I believe there are a few places that bad code comes from.

We don’t know any better

This category applies to more than just junior developers.  Our industry is constantly changing.  There are new tools and techniques developed all of the time, and we don’t always know the latest “best practices”.

This category also applies to any new technologies that we use.  When we first pick up a new technique, technology or tool, we don’t know the best way to use it, and we will make mistakes.

We need to be diligent to pay attention to the mistakes we make.  We need to be open to feedback from others (even though it can be hard to hear).  And most importantly, we need to be open to change when we find out that something isn’t working as well as we had hoped.

We are in a time crunch

I hate this excuse, but I will admit that I have used it.  I would love to live in a world where we can take the time necessary to do things “the right way”, but we don’t always have that luxury.

The key with this category is that we need to be very careful when we decide to use this as an excuse for bad code.  We all know that bad code is going to slow us down in the future, even if it gives us a short productivity boost in the present.

When we make the trade of productivity for quality we need to ensure that we go back and improve the quality when we have a little more time.  We will rarely have the time to go back and rewrite an entire application, or even parts of it, but we should be able to slip in refactorings as we are working on code that will incrementally improve the landscape.

As professionals, when we know that there is a portion of the code that is sub-optimal, and there is work to be performed in that area, we need to raise this concern.  We need to make the case that we need a little longer to do the work so that we can improve the code.

We are learning about the system/domain

I have seen this class of issue many times.  Typically it presents itself as a poorly or improperly modeled domain.  As we are beginning a new project, we often don’t know as much about the domain as would be optimal for modeling it.  We do our best to model it as closely to reality as possible, but we almost always learn that we got something wrong, or didn’t model it at the correct level of abstraction.

This is one of the hardest types of bad code to fix since it is often an integral part of the architecture.  I believe there are ways to incrementally improve our architecture to better match reality, but we need to do this very carefully.

We are working on a legacy system

The last category that I see happens as we are working on a system that has been around for a little while.  The code starts out clean and then a change request is made.  We make that little change, and the code still looks good.  Then another request and another change, and so on and so forth.  After some time, these changes begin to build up, and at some point we end up with a class or a method that is no longer “good code”.

I am as guilty of this as anyone else.  I try to keep my eye out for this sort of degradation of the code and will either spend a little extra time on a given task to clean it up, or if there is no “extra time”, I try to keep track of these “code smells” in our issue tracking system.

Some organizations do not like the idea of adding technical debt cards to the issue tracking software.  I worked in an organization like this for a while.  Rather than writing these cards up in the issue tracker, we created physical cards with code smells on them.  When a new card was created, we would review it at our iteration kick-off so everyone was aware of the issue.  Then, if we were working in that area on another card, we could grab the technical debt card and try to make that change as well.


I think the most important takeaway here is that we need to take ownership of the fact that we are going to write bad code sometimes.  We need to admit it to ourselves, and realize that this does not make us bad programmers, or bad people.  What makes someone a bad programmer is the refusal to improve.

Explaining strongly typed and weakly typed languages to a 12-year old

Literally.  My son is 12 and has been working on a Python text adventure this summer (on and off).  The adventure started off as a single script, and he has been transitioning it to use objects.  My husband and I have been helping him figure out what belongs to what object, and how to transition his code.

Blair and I had been thinking in Java terms since neither of us is familiar with Python, and were thinking that our son should move toward some kind of interface system so he could have “weapons” and “monsters” and you could treat any weapon as a generic Weapon.

We realized that since Python is dynamically typed (duck-typed), this sort of construct is not really necessary, and were discussing this with our son.  This is a rather difficult conversation to have with a child who has only a very basic understanding of programming (and all through hands-on classes that aren’t necessarily teaching the terms).
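For example (a made-up sketch, not our son’s actual game code — the class and method names here are my own), duck typing lets unrelated classes be used interchangeably as long as they have the same methods, with no shared interface in sight:

```python
# Hypothetical sketch: two unrelated classes, no common base class or interface.
class Sword:
    def attack(self):
        return "You swing the sword!"

class Slingshot:
    def attack(self):
        return "You fire the slingshot!"

def use_weapon(weapon):
    # No generic Weapon type needed: anything with an attack() method works.
    return weapon.attack()

print(use_weapon(Sword()))      # You swing the sword!
print(use_weapon(Slingshot()))  # You fire the slingshot!
```

If it walks like a weapon and attacks like a weapon, Python treats it as one.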


After a number of attempts to explain the difference between a strongly typed language and a weakly typed one, an analogy popped into my head that I think was helpful.

As a young child we all played with shape sorters like this one:

[image: shape sorter]

I said that a toy like this is like a strongly typed language.  Think of each of the holes as a variable.  Those holes can only hold certain shaped objects.  If you try to put the wrong shape in there, it just won’t work.

A weakly typed language would have holes large enough that any of the shapes could fit.  Like this:

[image: shape sorter with holes big enough for any shape]
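In code terms, the analogy looks something like this (a rough sketch, with the Java part shown only in comments):

```python
# Python: the "holes" (variables) are big enough for any shape (type).
slot = 12          # an int fits
slot = "twelve"    # so does a string
slot = [1, 2]      # and a list

# In Java, each hole only fits one shape:
#   int slot = 12;
#   slot = "twelve";   // compile error: wrong shape for this hole
```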


Unit Testing – Part 1

I’m fairly new to writing automated unit tests. I’ve read about it, I’ve tried it out, and we even have a handful of Java classes (10 to 20) that have full coverage from unit tests. But now it looks like we might be changing our view of testing and diving in head first.
So here are some things that I’ve learned that I didn’t really pick up on when reading the books about JUnit.

Step 1 – Write Testable Code
Something that I missed when reading books about unit testing is that you have to change the way you write the code in the first place. This is generally a good thing, but something that is difficult to do across an entire development team.
Long methods – are very difficult to test. We have some methods that are over 1000 lines; it’s not easy (maybe not even possible) to test a method that is this big.

Dependency Injection – methods that create their own database connections, or create any new objects directly, are very difficult to test well. For legacy code, I’ve found that a good way to handle this is updating the method to take these objects as parameters, and then creating a new method with the old signature that simply creates the objects and passes them in. This allows you to create mock objects for database connections, etc., which lets you test all the code in the method without needing a database to connect to.
    public User buildUser() {
        Connection conn = getConnection();
        return buildUser(conn);
    }

    public User buildUser(Connection conn) {
        // ... existing/old code to read the user object from the database ...
        return user;
    }
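To make the idea concrete (a hypothetical Python sketch of the same pattern, not our actual code), a test can hand in a fake connection instead of a real database:

```python
from unittest.mock import Mock

class User:
    def __init__(self, name):
        self.name = name

def build_user(conn):
    # Same idea as the Java version: the connection is a parameter,
    # so a test can pass in a stand-in instead of a real database.
    row = conn.fetch_user_row()   # fetch_user_row is a made-up method name
    return User(row["name"])

# In a test, no database is needed:
fake_conn = Mock()
fake_conn.fetch_user_row.return_value = {"name": "alice"}
user = build_user(fake_conn)
print(user.name)  # alice
```

The test controls exactly what the “database” returns, so every branch of the method can be exercised.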

References to Static Methods – Static methods are easy to test. Code that calls static methods is VERY hard to test, because you can’t mock/stub out the static methods. So you end up having to test all the code in the static method along with the code in the method you actually want to test. I don’t think small methods that take few or no parameters from well-tested libraries are much of an issue. For example, if your code calls Math.random(), it’s still fairly easy to test. But if you have a lot of “utility” methods that take large objects as parameters or that do complicated logic, it’s difficult to mock up the test data to get those static methods to return all the different scenarios that allow you to run through all the code in the method under test.
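A small Python sketch of the same problem (the function names are made up for illustration): a hard-coded call is invisible to the test, while an injected one is trivial to stub:

```python
import random

def roll_damage_hardcoded():
    # Hard to test: the call to random.randint() is baked in,
    # so a test can't control what it returns without patching.
    return random.randint(1, 6) * 2

def roll_damage(rng=random.randint):
    # Easier to test: the call is a parameter with a sensible default,
    # so a test can pass in a predictable stand-in.
    return rng(1, 6) * 2

# Production code uses the default; a test injects a fake:
print(roll_damage(lambda lo, hi: 3))  # 6
```

The first version can only be tested by running every possible outcome; the second lets the test pick the outcome directly.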

If developers are writing code that is difficult to test, they will hate writing unit tests. It is time-consuming, the tests will break often from small changes in the code, and code coverage will be low. Developers need to change the way they write the code in the first place before a team can be successful in implementing unit tests.

In general, testable code is better quality code anyway. Short methods that “do one thing”, and methods that use object-oriented design (i.e., not static), are easier to maintain, easier to read, and much nicer to work with. This is a side benefit of implementing unit tests.

I’ll continue with my lessons learned in later posts. Something I’m still not sure how we are going to handle is all the existing legacy code that doesn’t fall into the category of “testable” or at least not “easily testable”.