SlurrerOfSpeech
I remember a while back someone told me that there exist programmers who don't believe in null. I thought that was a crazy idea. But recently, as I've gotten better at OOP, I've figured out that if my object has a property that is allowed to be set to null, it usually means I am not OOPing correctly.
Example:
Let's say I have an object like
Code:
public class JobTracker
{
public DateTime Started { get; set; }
public JobStatus Status { get; set; }
public DateTime? Ended { get; set; }
public string ErrorMessage { get; set; }
}
I would argue that this is bad design. Someone using the object will have to write code such as
Code:
if (tracker.Status == JobStatus.Failed)
{
Console.WriteLine(tracker.ErrorMessage);
}
In other words, someone using it has to know or assume rules like
- Status is Failed => there is an error message and an end date
- Status is Succeeded => there is an end date but no error message
- Status is InProgress => there is no error message and no end date
- There is an end date => Status is Succeeded or Failed
- There is no end date => Status is InProgress and there is no error message
- Etcetera
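Note that nothing enforces those rules; the compiler happily accepts a contradictory state. A quick sketch (reusing the JobTracker class from above):

```csharp
using System;

// A contradictory state the compiler accepts without complaint:
// the job is marked Failed, yet has no end date and no error message.
var tracker = new JobTracker
{
    Started = DateTime.UtcNow,
    Status = JobStatus.Failed, // marked failed...
    Ended = null,              // ...but no end date
    ErrorMessage = null        // ...and no error message
};
Console.WriteLine(tracker.Status); // prints Failed

public enum JobStatus { InProgress, Succeeded, Failed }

public class JobTracker
{
    public DateTime Started { get; set; }
    public JobStatus Status { get; set; }
    public DateTime? Ended { get; set; }
    public string ErrorMessage { get; set; }
}
```

Every caller is left to re-derive and re-check these invariants by hand. One way out is to push the rules into the type system instead: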
Code:
public interface IJobTracker
{
DateTime Started { get; }
JobStatus Status { get; }
}
public interface IFinishedJob : IJobTracker
{
DateTime Ended { get; }
}
public interface IFailedJob : IFinishedJob
{
string ErrorMessage { get; }
}
This leads to more elegant and failproof code like
Code:
if (tracker is IFailedJob failedJob)
{
    Console.WriteLine(failedJob.ErrorMessage);
}
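For completeness, concrete classes implementing these interfaces might look like the sketch below (the class names InProgressJob, SucceededJob, and FailedJob are my own, not from the original design):

```csharp
using System;

IJobTracker tracker = new FailedJob(
    started: DateTime.UtcNow.AddMinutes(-5),
    ended: DateTime.UtcNow,
    errorMessage: "Out of disk space");

if (tracker is IFailedJob failed)           // test and cast in one step
    Console.WriteLine(failed.ErrorMessage); // prints "Out of disk space"

public enum JobStatus { InProgress, Succeeded, Failed }

public interface IJobTracker
{
    DateTime Started { get; }
    JobStatus Status { get; }
}

public interface IFinishedJob : IJobTracker
{
    DateTime Ended { get; }
}

public interface IFailedJob : IFinishedJob
{
    string ErrorMessage { get; }
}

// Each state is its own type, so the illegal combinations are
// unrepresentable: an in-progress job simply has no Ended or
// ErrorMessage property to get wrong.
public class InProgressJob : IJobTracker
{
    public InProgressJob(DateTime started) => Started = started;
    public DateTime Started { get; }
    public JobStatus Status => JobStatus.InProgress;
}

public class SucceededJob : IFinishedJob
{
    public SucceededJob(DateTime started, DateTime ended) =>
        (Started, Ended) = (started, ended);
    public DateTime Started { get; }
    public DateTime Ended { get; }
    public JobStatus Status => JobStatus.Succeeded;
}

public class FailedJob : IFailedJob
{
    public FailedJob(DateTime started, DateTime ended, string errorMessage) =>
        (Started, Ended, ErrorMessage) = (started, ended, errorMessage);
    public DateTime Started { get; }
    public DateTime Ended { get; }
    public JobStatus Status => JobStatus.Failed;
    public string ErrorMessage { get; }
}
```

Because the properties are get-only and each constructor requires all the data for its state, a FailedJob cannot exist without an end date and an error message.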
As always, there are tradeoffs. The way I showed is not micro-optimal, because the runtime has to navigate the inheritance chain when checking the type.
Do you agree with most of what I wrote above? Why or why not?