One question that always comes to my mind in discussions about code coverage is: What does the percentage refer to, anyway? This is probably a stupid question. It seems to be common sense that it is the whole codebase. But does it really have to be?
What is the value of code coverage, again? I like how Martin Fowler puts it in his article on this topic:
Well it helps you find which bits of your code aren’t being tested.
Finding these bits is easier when there are only a few of them. By that I mean: few in the coverage report. Your codebase may have a lot of untested bits and that’s okay. There are things you shouldn’t test, anyway1. To get rid of the noise, you just have to follow a simple process:
For every untested bit, you have to decide: Is this testworthy? If it is, write tests (or at least acknowledge that you should write tests for it). If not, exclude it from the coverage analysis.
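As an illustration, here is one way this exclusion step could look with Python's coverage.py, which supports marking individual lines or blocks with a `# pragma: no cover` comment. The class and its members are made up for the example:

```python
class Account:
    """Example domain object with a mix of testworthy and trivial code."""

    def __init__(self, owner: str) -> None:
        self._owner = owner
        self._closed = False

    @property
    def owner(self) -> str:  # pragma: no cover
        # Trivial getter: deliberately excluded from coverage analysis.
        return self._owner

    def close(self) -> bool:
        # Business logic worth testing stays visible in the coverage report.
        if self._closed:
            return False
        self._closed = True
        return True
```

Project-wide exclusions (generated code, whole modules) can go into the tool's configuration instead, so the report only ever shows bits you have consciously decided are testworthy.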
For some people this is cheating, but I’d rather call it focusing. I don’t need to be reminded of code that I’ll never write tests for. Working this way allows you to reach 100% without writing tests that provide no real value. But even if you don’t reach it, it leaves you with a better picture, since you actively decided whether something should be tested.
Personally, I do not see much value in testing stuff like getters/setters, generated code, or simple data mapping. You, your project, and/or your team may have other standards, and that’s fine. ↩︎