I avoid using null whenever it makes sense to do so. Sometimes avoiding null is easy, and sometimes it requires extra work. In both cases, though, I’ve never looked back wishing I’d used null instead.

There are many arguments on the Internet™ explaining why you should not use null. Some of these arguments are pretty solid:

null is a type on its own and requires special attention

In computing, a null pointer or null reference is a value saved for indicating that the pointer or reference does not refer to a valid object.1 When you return a nullable type from a method or create a nullable column in the database, you create a fork in the road: one path keeps you on the road, but it will always be possible to take the other path and hit a dead end.
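To make the fork concrete, here is a minimal Ruby sketch; the lookup method and data are mine, purely illustrative:

    USERS = { 1 => "Ada", 2 => "Grace" }

    def find_user_name(id)
      USERS[id] # returns nil for unknown ids: the other fork in the road
    end

    name = find_user_name(7)
    if name
      puts "Hello, #{name}"
    else
      puts "No such user" # every caller has to handle this dead end
    end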

null introduces tests on additional code paths

If you’d like to avoid runtime errors, you have to check for null whenever you get something back from a method call. That means extra conditional statements whose only job is to verify that the returned value is a valid object. And since you’ve introduced extra null checks, you also have to add tests to cover the new code paths. If you have objects nested n levels deep that are nullable on every level, you create 2^n code paths just to check for empty values.
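Here is a small sketch of how those paths multiply; the User and Address structs are illustrative:

    Address = Struct.new(:city)
    User = Struct.new(:address)

    user = User.new(Address.new(nil)) # any level can be nil

    # Three nullable levels mean 2^3 = 8 possible states to reason about,
    # even though only one of them is the happy path:
    city =
      if user && user.address && user.address.city
        user.address.city
      else
        "unknown"
      end

    puts city # => "unknown"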

Debugging null errors is more difficult

Fortunately, Ruby and many other languages have a safe navigation operator, right? With the &. operator you can reach deep into an object graph without checking for null at every level. However, the safe navigation operator is just syntactic sugar, and it hides the real problem: nullable types create uncertainty, and that uncertainty is most visible when you try to debug. I bet you’ve seen this error if you’ve programmed in Ruby before:

NoMethodError: undefined method `*' for nil:NilClass

Debugging this error message is extra hard because it doesn’t give you any clues on where to start. You can’t tell which object was nil to begin with.
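Here’s a sketch of how that error typically comes about; the Order and Item structs are illustrative:

    Item = Struct.new(:price)
    Order = Struct.new(:items)

    order = Order.new(nil) # somewhere, something failed to load

    # &. lets the nil propagate quietly instead of failing here,
    subtotal = order&.items&.first&.price

    # and the crash happens far from its origin, with no hint of
    # which link in the chain was actually nil:
    subtotal * 1.2 # NoMethodError: undefined method `*' for nil:NilClass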


The arguments for why you should avoid null go on and on.

I’m not going to go so far as to say “Null is Evil”, but let’s hear from Sir Tony Hoare, who thinks that introducing null references in the 60s was a billion-dollar mistake:

Things have changed a bit. People are a little bit more worried about errors and they are moving to languages like Java in which subscript checking2 is standard […] And then I went and invented the null pointer. If you use a null pointer, you either have to check every reference, or you risk disaster.3

Ways to avoid null

There are many ways to avoid null, one of the most famous being the Null Object pattern. Having a valid object that implements default behaviour can clean up those nasty empty checks. I like this approach a lot because it shows the true power of polymorphism: if you can treat all your objects the same way, without having to check their type, then you are taking advantage of Polymorphism.4 The same approach works for testing, when you’d like a real object to test against instead of a mock. Null objects require more work than simply using null, but once they’re in place, the code is easier to read and its behaviour easier to understand.
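A minimal sketch of the pattern in Ruby; the class and method names are mine:

    class User
      def initialize(name)
        @name = name
      end

      def greeting
        "Hello, #{@name}!"
      end
    end

    # A valid object with default behaviour instead of nil:
    class GuestUser
      def greeting
        "Hello, stranger!"
      end
    end

    def current_user(session)
      session[:user] || GuestUser.new
    end

    # No nil checks anywhere: both objects answer #greeting the same way.
    puts current_user({}).greeting                          # => "Hello, stranger!"
    puts current_user({ user: User.new("Ada") }).greeting   # => "Hello, Ada!"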

Sometimes you don’t need to go as far as creating a completely new object. Some objects don’t require polymorphic behaviour, or what you’re returning is basically a string. I prefer using empty strings instead of null whenever appropriate (i.e., when the empty string does not denote an invalid state). Knowing that a method always returns a string relieves you from worrying about empty checks.
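A sketch of what I mean, with an illustrative Person class whose middle name is optional:

    class Person
      def initialize(middle_name = nil)
        @middle_name = middle_name
      end

      # Always returns a string, never nil.
      def middle_name
        @middle_name || ""
      end
    end

    # Callers can chain string methods without an empty check:
    Person.new.middle_name.upcase # => ""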

Another way I avoid null is to raise domain-specific exceptions when I need to flag an invalid state. The disadvantage here is that you still need to catch these exceptions. But the behaviour is then properly documented via domain logic errors: you know exactly what has gone wrong and how to fix it.
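A sketch of the idea, with an illustrative order lookup whose bang variant raises instead of returning nil:

    class OrderNotFoundError < StandardError; end

    ORDERS = { 1 => "book", 2 => "pen" }

    def find_order!(id)
      ORDERS.fetch(id) { raise OrderNotFoundError, "no order with id #{id}" }
    end

    begin
      find_order!(42)
    rescue OrderNotFoundError => e
      puts e.message # the failure is explicit and named after a domain rule
    end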

As with everything, my favourite answer is: it depends. That’s why I don’t like marking anything as evil and trying to avoid it completely. Instead, I find the practice of avoidance a helpful exercise for working towards a better design.

If you have about 30 minutes to spare, I highly recommend Sandi Metz’s 2015 RailsConf talk, Nothing is Something. She argues brilliantly why avoiding null is a good idea, and you also get a crash course on why object-oriented design is so powerful and how to harness that power. She covers the true nature of object-oriented design with live examples of polymorphism, composition over inheritance, the Null Object and Strategy patterns, dependency injection, and more.

  1. Wikimedia Foundation. (2023, January 27). Null pointer. Wikipedia. Retrieved March 20, 2023, from https://en.wikipedia.org/wiki/Null_pointer 

  2. I had a bit of a hard time understanding what subscript checking really is. Hoare used this term instead of type checks, so I dug deeper. Eventually what I found was bounds checking: essentially, any method of detecting whether a variable is within some bounds. I found one of Hoare’s papers, titled Subscript Optimisation and Subscript Checking, which explains why subscript checking can be problematic:

    Subscript checking involves heavy loss of efficiency at run time, and often gives rise to serious object code expansion. And yet the omission of the check leads to severe risks; since the results of a subscript out of range are entirely unpredictable to the programmer, and they reveal themselves only much later in the execution of the program. These errors will cost a lot of time and money to detect and remove; and since they may be data-dependent, it is possible that they may lie dormant during program testing, only to create havoc in a production run.

  3. QCon. (2009). Null References: The Billion Dollar Mistake. InfoQ. Retrieved March 20, 2023, from https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/

  4. Castello, J. (2020, January 2). Everything you need to know about nil. RubyGuides. Retrieved March 20, 2023, from https://www.rubyguides.com/2018/01/ruby-nil/