An anti-pattern is a repeatable process that produces ineffective results. Patterns emerge because we, software craftspeople, repeatedly see a standard solution to a common problem, so we begin to apply it whenever a similar situation appears. As a result, patterns help us reduce costs and save time. An anti-pattern is similar to a pattern in the sense that we reach for a specific, familiar solution. However, when we stretch that solution across a broader set of problems that may call for something different, an anti-pattern emerges. It’s the law of the instrument in play: when the only tool you have is a hammer, you treat everything as if it were a nail.

Reaching a high-level perspective that lets me see patterns and anti-patterns at play was crucial for designing better systems. Lately, though, I started noticing a meta anti-pattern at play: an anti-pattern of anti-patterns. Once a specific solution is considered an anti-pattern, people start avoiding it altogether, even when applying the pattern is appropriate. This thinking is wrong. A pattern emerges out of the necessity to solve a problem. Before they were classified as anti-patterns, many patterns were born because they were genuinely helpful in a particular case.

Let’s consider a simple design pattern defined by the Gang of Four: Singleton. It lets you define an object that knows no boundaries within your system: a single, unique instance accessible from anywhere you call it. Configuration values or database connection objects are typical examples. Over time, however, Singleton became quite popular because it let developers bypass the system’s boundaries and take shortcuts to the entities they needed. The Singleton pattern became an anti-pattern because we started using it to hack the system. It’s a good exercise to think about alternative solutions first; still, for some instances, singleton objects can be helpful.
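To make that concrete, here is a minimal Java sketch of the pattern, assuming a hypothetical application-wide configuration object (the `AppConfig` name and its `DATABASE_URL` fallback are illustrative, not from any particular library):

```java
// A minimal Singleton sketch: one unique, globally reachable instance.
public final class AppConfig {
    // Initialization-on-demand holder idiom: lazy and thread-safe
    // without explicit locking.
    private static final class Holder {
        static final AppConfig INSTANCE = new AppConfig();
    }

    private final String databaseUrl;

    private AppConfig() {
        // In a real system this might read a file or environment variables.
        this.databaseUrl =
            System.getenv().getOrDefault("DATABASE_URL", "jdbc:h2:mem:dev");
    }

    public static AppConfig getInstance() {
        return Holder.INSTANCE;
    }

    public String databaseUrl() {
        return databaseUrl;
    }
}
```

Any class can call `AppConfig.getInstance().databaseUrl()`, which is exactly the boundary-bypassing convenience described above: handy for genuinely global state like configuration, and a hack when used to smuggle dependencies past the system’s design.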

Another example I can think of is inheritance, one of the three pillars of object-oriented design. Object-oriented programming introduced a significant paradigm shift, and all three of its pillars are quite useful. Inheritance, however, became an anti-pattern over time because developers started using it purely as a code-reuse tool. Reuse by inheritance led to multiple layers of unnecessary abstraction in systems, a notable driver of complexity. The principle of composition over inheritance offered a more flexible way to reuse code in object-oriented systems. Yet inheritance is still a helpful pillar when used correctly. Defining interfaces that different concretions implement is one of those use cases. Inheriting from concretions instead of abstractions may not be the best approach for all cases. Still, when you follow the Liskov Substitution Principle, you can design better abstractions powered by inheritance.
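A short sketch of that distinction, again in Java with hypothetical names (`Shape`, `Circle`, `Square`, `ReportPrinter`): inheriting from an abstraction gives substitutability, while reuse happens through composition.

```java
import java.util.List;

// Inheriting from an abstraction: every implementation can stand in
// wherever a Shape is expected (Liskov Substitution Principle).
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override public double area() { return Math.PI * radius * radius; }
}

final class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override public double area() { return side * side; }
}

// Composition for reuse: ReportPrinter holds shapes rather than
// extending a concrete shape class to borrow its behavior.
final class ReportPrinter {
    private final List<Shape> shapes;
    ReportPrinter(List<Shape> shapes) { this.shapes = shapes; }

    double totalArea() {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```

A caller can pass `new ReportPrinter(List.of(new Circle(1.0), new Square(2.0)))` and `totalArea()` works unchanged for any substitutable `Shape`; no concrete class had to be subclassed to reuse the printing logic.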

I am skeptical whenever a choice becomes all-in or all-out. Most decisions we need to make in life (if not all) present themselves on a scale. There are almost always trade-offs that require software craftspeople to think and assess based on the context. There is certainly no super solution that can answer all of our problems. However, it’s also not wise to make a suboptimal decision just because other people have misused a solution that could lead to an optimal decision for your case.