Blindly Following Best Practices Is Not A Best Practice

This is intended as a "my thoughts on this topic" blog, not as an exhaustive scientific essay on software development and human nature. This is a rant post. Take it or leave it and share your thoughts in the comments!

I believe that as a software developer it is my calling to write good software. Not just software. Good software. In the same way that the medical profession's calling is to promote its patients' health, and the legal profession's calling is to defend the interests of its clients, so, I think, the programmer's calling should require a high standard of quality, maintainability, and forward thinking in the software they write.

I find the perception which has pervaded the software engineering industry that programmers are slobby, slothful hackers who cut corners to "make it work" both offensive and inaccurate. I strive to work beyond this perception.

But part of acting as a professional is to keep the pendulum from swinging too far in the other direction. (Or, perhaps instead, to avoid doing things that "feel" professional but are not really.)

In an effort to write good software, the industry has come up with many "best practices" that are held up as a standard to work towards. Unit testing is a best practice. Continuous integration is a best practice. Test driven development is a best practice. Refactoring is a best practice. Agile development is a best practice.

And they probably are... in most cases.

I am not against any of these practices. I agree with all the ones I listed (except maybe test driven development... but that is for another blog) and I have practiced, am practicing, or am working towards practicing all of them.

The problem that concerns me is that best practices are taken for granted. In fact, I often hear the term "best practice" as a blanket justification or a way of ending an argument.

"Why do we do that? Well... It's a best practice."

"I understand your concerns, but we need to do this because it is a best practice..."

"No, but you don't understand... This is a best practice."

That bothers me.

Why? Because when you blindly follow a practice just because it is a "best practice" you have stopped thinking critically about what you are doing. And when you stop thinking, you are less likely to write good code.

As I wrote this I realized that I started a lot of sentences with "I think", "in my opinion", and "my belief"... I do not pretend to be an expert in the topics of human psychology and its application to software development. These are just a collection of thoughts and observations that I have forged over time in my personal experience.

I am always open to counter arguments (and, in fact, that is a core part of the point I am making... Never cling to dogma at the expense of critical thinking.) I am not trying to persuade you of a certain position. My only goal is to inspire you to take a look at why you do what you do and evaluate if it still makes sense in your context.

Principles versus Practices

I think there is a very helpful distinction to be made between "best practices" and "best principles". A principle is abstract. It is applied to circumstances on a case by case basis. A principle is something like "encapsulation" or "keep it simple" or "do not repeat yourself". It is a concept that can be manifested in many ways and is understood to have a positive benefit.

A practice is an application of a principle to a situation. A practice is something like "make your methods 50 lines or less", "use getters/setters to manipulate object state", or "call classes through interfaces". The practice is a rule of thumb that implements the principle.

The problem is that best practices are just that: rules of thumb. Like all rules of thumb, they do not always apply. But as developers (or even as managers) we become so fixated on the "we must follow best practices" mantra that we miss the nuance of a specific situation.

I was in a code review (not a best practice in my opinion...) where a class much like the following was reviewed:

class Coordinates {  
    public final int x;
    public final int y;
    public Coordinates(int x, int y) {
        this.x = x;
        this.y = y;
    }
}

NOTE: Names and context changed to protect the innocent.

You will notice that the x and y members are public final and that there is no getter method. The programmer was dinged in the review and told to use a getter instead.

He proceeded to make what I thought was a compelling argument for why a getter method was both unnecessary and counterproductive. These are primitive types. They are immutable. The code that used this class used those members extensively. Adding getter methods would dramatically swell the code and make it harder to read and visually parse. He even argued that there was an (admittedly slight) performance loss due to the method call overhead.
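For contrast, here is roughly what the reviewer was asking for. This is a sketch on my part (not the actual reviewed code), with a hypothetical distance calculation as the call site, but it shows the tradeoff the developer was describing:

class Coordinates {
    private final int x;
    private final int y;
    public Coordinates(int x, int y) {
        this.x = x;
        this.y = y;
    }
    public int getX() { return x; }
    public int getY() { return y; }
}

// A hypothetical call site, with public fields versus getters:
//   double d = Math.hypot(b.x - a.x, b.y - a.y);
//   double d = Math.hypot(b.getX() - a.getX(), b.getY() - a.getY());

Multiply that second line across a code base that touches x and y everywhere and the "swell" he was worried about is easy to picture.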

The response?

"Using getter methods is a best practice. Change it."

I find that distressing. Yes, using getter methods is a good rule of thumb. It insulates your code from internal changes to a class and promotes encapsulation. But in this case the developer under review made a strong counter argument based on the context and other principles (such as readability).

That is the problem with the "best practices" mindset: edge cases, and even plain critical thinking about a given context, get ignored in favor of applying the practice.

I think that in software development (and actually in all of life) you should focus on understanding and applying principles, not on following scripted applications of those principles. Understand encapsulation. What it is. Why it is important. How to apply it. But also understand the tradeoffs! Do not become myopic in applying one principle (encapsulation in the example above) at the expense of others (clarity and readability).

Everything Has Cost vs Benefit

An underlying issue that I think is not well addressed in the software development industry as it stands is that everything has a cost. Bad things have a cost (I love the concept of technical debt as a metaphor for explaining this...) But good things have a cost too!

Many best practices require hard work to apply. Writing unit tests takes a lot of developer time. Continuous Integration takes time to set up and effort to maintain. Refactoring is hard, and it is sometimes even harder to convince customers that it is an important thing to do.

We can get so caught up in extolling the virtues of a particular best practice ("Unit tests give you the courage to refactor!", "Continuous Integration lets us deploy to Production in minutes instead of hours!") that we understate or misjudge the costs of the practice.

In most cases, like the ones listed above, we would argue that the benefit outweighs the cost and then dismiss it from our minds. But I think it is important to have that argument (at least in your head) each and every time you make a commitment to changing your software development process. Keep the cost/benefit analysis always on the forefront of your mind and do not take it for granted.

One reason is that we can easily miss different kinds of cost. Cost is not just time and effort. Cost can also be the effect a practice has on project culture or software architecture.

I will pick a controversial example... Unit Testing.

Unit Testing has a lot of benefits. It captures knowledge about your code (such as expected behavior and past bugs) and constantly applies that knowledge to your code. It allows you to make changes with less fear of introducing bugs. It saves a lot of time and eliminates some manual work required to test code.

Unit Testing has a direct cost in terms of time and effort. It often takes at least as much time to write the tests as it does the code. Frameworks and libraries help mitigate this, but it is still a cost.
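To ground both points at once, here is a minimal sketch of what a test for the original public-field Coordinates class above might look like. (JUnit 5 is my assumption here; I am not describing any particular project's test setup.) Even for a class this trivial, the test is roughly as long as the code it covers:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CoordinatesTest {
    // Captures the expected behavior: the constructor arguments are stored as-is.
    @Test
    void storesTheConstructorArguments() {
        Coordinates c = new Coordinates(3, 4);
        assertEquals(3, c.x);
        assertEquals(4, c.y);
    }
}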

If that was all, I would argue that, in almost every case, unit testing's benefits outweigh the cost.

But there is another problem... Unit Testing increases the cost of refactoring. I think it is impossible to write tests in such a way that they are unaffected by rearchitecting your code. (Unless, maybe, your code is already very, very well architected, in which case you probably do not need to refactor!)

My observation is that unit tests add resistance to refactoring, because they further increase the time and effort it takes. My opinion is that refactoring is essential to the health of software, so this cost is not insignificant.
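Here is a contrived sketch of the kind of coupling I mean. (GridIndex is a hypothetical helper I made up for illustration; it is not code from the project I describe next.) The test pins down an internal collaborator rather than externally observable behavior, so a refactoring that inlines or renames GridIndex breaks the test even though nothing a user can see has changed:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A hypothetical internal helper that a later refactoring might fold into its only caller.
class GridIndex {
    int toOffset(Coordinates c, int width) {
        return c.y * width + c.x;
    }
}

class GridIndexTest {
    // Welded to the existence of GridIndex: inline or rename that class and this
    // test must be rewritten, even though the system's behavior is identical.
    @Test
    void mapsCoordinatesToARowMajorOffset() {
        assertEquals(7, new GridIndex().toOffset(new Coordinates(2, 1), 5));
    }
}

Multiply that by a few thousand tests and the resistance to refactoring becomes very real.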

As a personal example, I started working on a project that, while generally well architected, was in need of significant refactoring. (In the end, we reduced the code base from 250,000+ lines of Java to 38,000 lines of Java while adding several major pieces of functionality... That's a lot of refactoring!)

The project already had a suite of unit tests associated with it, but, since so much refactoring was called for, we actually abandoned the unit tests completely. After about a year of refactoring, with the code base now roughly 20% of its original size, things are much more stable. We are making far fewer sweeping changes and are now prioritizing reintroducing unit tests.

The cost and benefit were weighed. In the short to mid term we prioritized refactoring over unit testing. Now that the major refactoring is winding down, unit testing is being re-prioritized. That seems reasonable to me.

Some people are appalled that we disposed of the unit tests. Why? "Unit testing is a best practice!"

Whatever.

Rules of Thumb and Breaking The Rules

NOTE: Not to be confused with rules against breaking your thumb...

I want to write a whole blog about process, rules of thumb, and the human mind, but this one probably is not it. I just want to close with some observations about levels of expertise and how they apply to the issue of best practices.

NOTE: I am indebted to this book for introducing me to these concepts, and particularly to the Dreyfus model which I discuss below. A highly recommended read!

The Dreyfus Model of Skill Acquisition is a psychological framework proposed in the 1980s by two brothers (Stuart and Hubert Dreyfus) that models how humans learn and improve at skills. I will defer a lengthier discussion to a future blog post, but the key point I want to draw from it is how rules (or best practices, for our discussion) affect the performance of people at different levels of expertise.

The Dreyfus model proposes five levels of expertise: Novice, Advanced Beginner, Competent, Proficient, and Expert. At earlier levels of proficiency, the practitioner (or software developer in our case) benefits from following recipes or procedures. (The common analogy is to cooking. As you learn to cook, you rely on recipes. For a novice cook, cooking from a recipe gives a better result than just winging it.)

However, as the practitioner increases in proficiency, recipes and procedures start to hinder their results. Intuition and subconscious instinct play a larger role, and being forced to adhere to strict procedures decreases the quality of the result. (Again, with cooking, this is like requiring a chef to rigorously follow a recipe. Deprived of the ability to make judgment calls, tweak, or add flair, the chef produces a worse result than improvisation would allow.)

My belief is that this principle applies with equal force to best practices. A best practice is like a recipe. It makes sense most of the time. For a fledgling programmer, relying on best practices is likely to improve the quality of their output. But as a programmer transitions toward expert, blindly following best practices decreases the quality of the output. An expert has the judgment to know where the rules of thumb are inadequate and when to break them.

That concludes my thoughts on that subject! Agree or disagree, I hope that you will make an effort to examine the practices you have in place and see if they make sense to you!

Questions? Comments? Email me at: [email protected]!