In my last devblog post, I introduced the concept of Unintentional Programming (UP). While written as a tongue-in-cheek parody of Intentional Programming, it’s a very real and frequently-encountered phenomenon. Obviously, no good software engineer intends to practice UP — yet somehow, we still sometimes manage to write code that looks like the result of UP. How does that happen, and what can we do to prevent it?
To answer those questions, I started by looking for trends in both “intentional” and “unintentional” code I’ve seen. (I use “intentional” to describe code that makes obvious both the problem it tries to solve and the way it tries to solve it.) Here are some of my most noteworthy observations:
- Otherwise bad code may still be highly intentional and thus easy to predictably fix and mentally verify.
- Of all possible types of bad code, unintentional code is the absolute worst, as it defies maintainability itself.
- Unintentional code often results from the compounded accumulation of minimal-change fixes (or misguided spot fixes) to a poorly-understood or already-overly-complicated area.
- Unintentional code often results from an incomplete or incorrect root cause analysis.
- Intentional code tends to fall out naturally from clearly and accurately understanding the problem.
- In an event-driven desktop application, one extremely common manifestation of UP is needlessly repeated identical executions of a potentially-expensive event handler.
- Unintentional code often results from temporary scaffolding or a workaround accidentally becoming permanent.
- Every developer tends to assume their intentions will be obvious to any other developer; it’s just a common pitfall of human psychology that nobody is immune to.
- More liberal usage of asserts could have surfaced many instances of unintentional code earlier, but reliance upon them isn’t enough, since the developer has to anticipate what to assert against, and unintentional code by definition works for reasons unexpected by the developer.
- Unintentional code slips through quality gates and into released software because it seemed to be working, so nobody ever thought to question what it was actually doing or how it was arriving at its results.
- Testing (of any kind, including manual) can never surface all unintentional code, because even the most creative tests only verify that the product’s results are correct; testing cannot explain how the product arrived at a correct result.
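To make the event-handler observation above concrete, here’s a minimal sketch of how identical handler executions can silently pile up. The names (`DataStore`, `onChanged`, `rebuildView`, `showPanel`) are made up for illustration and don’t come from any real framework:

```typescript
type Handler = () => void;

// A bare-bones stand-in for any event source in a desktop framework.
class DataStore {
  private handlers: Handler[] = [];
  onChanged(h: Handler): void { this.handlers.push(h); }
  notify(): void { for (const h of this.handlers) h(); }
}

let rebuildCount = 0;
const rebuildView: Handler = () => { rebuildCount++; }; // stand-in for expensive work

const store = new DataStore();

// The pitfall: subscribing inside a routine that can run more than once.
// Each call silently stacks another identical subscription, and everything
// still *appears* to work -- the view rebuilds, just more times than intended.
function showPanel(): void {
  store.onChanged(rebuildView);
}

showPanel();
showPanel();    // e.g. the user closes and reopens the panel
store.notify(); // one change event...

console.log(rebuildCount); // 2 -- the expensive handler ran twice
```

The intentional version would make the subscription an explicit, one-time act: subscribe once at construction time, or unsubscribe before re-subscribing, so the code’s behavior matches what a reader would predict.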
Knowing these things, we can begin to dream up ways to prevent Unintentional Programming. Here are some things we could try:
- Engineering teams need to aggressively “shine a light” on “spooky” code areas that (almost) nobody understands. Allocate time and people to efforts such as refactoring and adding true unit tests. Do group readings/refactorings of spooky code so engineers can leverage each other’s insights and ideas to come to an understanding about how the code works and massage it into a more self-evident form.
- Apply the “5 Whys” method of root-cause analysis to all proposed code changes, especially bug fixes, to ensure we’re always changing and fixing the right things for the right reasons.
- Require at least one other engineer to build and step through any proposed change before calling it good. It’s also a good idea to explicitly name the individual who performed that step in the change description, so that anyone looking through change history has one more person they know they can ask about that change.
- If code makes an assumption, it either needs to gracefully handle any violation of that assumption, or assert that the assumption is true. Asserts help catch problems before release.
- Engineers should be stepping through the code in the debugger for common/core usage scenarios as part of every release’s manual test pass. This will add confidence that the code is still working the way it’s expected to work under the hood, in a way that can’t necessarily be caught by unit tests, integration tests, UI automation tests, or dogfooding.