Writing parameterised unit tests in C#
How to take your unit testing to the next level
Unit tests are awesome.
Being able to write code that tests a piece of your code quickly and reliably is a massive game-changer.
It’s a good skill to know how to test your own code appropriately: knowing what to test and what to skip over, knowing how many different scenarios to check for, and knowing what should be an isolated test and what should be a far-reaching integration test. There are a lot of different considerations to make.
One such scenario is how to write a range of tests that only differ in the input data. That’s what we’ll look at here.
Imagine you have written a helper method that takes a string, inserts a space before each capital letter, and lower-cases the result (e.g., it converts ‘myClassName’ into ‘my class name’). Your method might look something like this:
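Here’s a rough sketch of that helper. Only the SplitWords name matters for the rest of this article; the StringHelper class is just a placeholder of mine, and the body is deliberately left unimplemented for now:

```csharp
using System;

public static class StringHelper
{
    // Should insert a space before each capital letter and lower-case the result,
    // e.g. "myClassName" becomes "my class name".
    public static string SplitWords(string input)
    {
        // No logic yet - we'll write the tests first, then implement this.
        throw new NotImplementedException();
    }
}
```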
If we wanted to write some tests for this, we might start off with testing that it can handle a basic string that should be split into 2 words, and make sure the happy path is working as expected. So, using xUnit, we might write this:
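Something along these lines; the test and class names are my own, and I’ve assumed ‘myClass’ becoming ‘my class’ as the two-word case:

```csharp
using Xunit;

public class SplitWordsTests
{
    [Fact]
    public void SplitWords_TwoWords_InsertsSpaceBeforeCapital()
    {
        var result = StringHelper.SplitWords("myClass");

        Assert.Equal("my class", result);
    }
}
```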
And when we run the test, we see it show up in our Test Explorer (if we’re using Visual Studio 2022):
Excellent. Obviously, the test is failing because we haven’t actually put any logic in there, but that’s ok — we’re doing Test Driven Development (TDD) here so it’s fine to write tests first, then write our code afterwards to make the tests all pass.
Now, say we want to test what happens when we’ve got a string that should be split into 3 words. Let’s add a new test with this different input:
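For example, inside the same test class, reusing the ‘myClassName’ example from earlier (the test name is again just my choice):

```csharp
[Fact]
public void SplitWords_ThreeWords_InsertsSpacesBeforeCapitals()
{
    var result = StringHelper.SplitWords("myClassName");

    Assert.Equal("my class name", result);
}
```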
When we run this, we see both our tests listed:
Great. We’re now comfortable that the main happy paths are covered; next, we need to cover some edge cases. What happens if the input has no capital letters and so shouldn’t be split? What happens if the input is null, or is whitespace? What happens if the input is entirely upper-case? We can write tests for each of these. Let’s have a look at our entire test class now:
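It might look something like this. Note that the expected values for the edge cases (null coming back as an empty string, whitespace returned unchanged, ‘ABC’ becoming ‘a b c’) are assumptions on my part; yours should reflect whatever your requirements actually are:

```csharp
using Xunit;

public class SplitWordsTests
{
    [Fact]
    public void SplitWords_TwoWords_InsertsSpaceBeforeCapital()
    {
        var result = StringHelper.SplitWords("myClass");

        Assert.Equal("my class", result);
    }

    [Fact]
    public void SplitWords_ThreeWords_InsertsSpacesBeforeCapitals()
    {
        var result = StringHelper.SplitWords("myClassName");

        Assert.Equal("my class name", result);
    }

    [Fact]
    public void SplitWords_NoCapitalLetters_ReturnsInputUnchanged()
    {
        var result = StringHelper.SplitWords("myclassname");

        Assert.Equal("myclassname", result);
    }

    [Fact]
    public void SplitWords_NullInput_ReturnsEmptyString()
    {
        var result = StringHelper.SplitWords(null);

        Assert.Equal(string.Empty, result);
    }

    [Fact]
    public void SplitWords_WhitespaceInput_ReturnsInputUnchanged()
    {
        var result = StringHelper.SplitWords("   ");

        Assert.Equal("   ", result);
    }

    [Fact]
    public void SplitWords_AllUpperCase_SplitsEveryLetter()
    {
        var result = StringHelper.SplitWords("ABC");

        Assert.Equal("a b c", result);
    }
}
```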
Can you spot the code smell there?
Each of these tests is identical, apart from the input and expected output values. Tests like these will obviously work fine, but over time they will become a headache to maintain: if the signature of the SplitWords method changes, for example, you’re going to have to update every one of these tests individually.
The solution? Parameterised tests.
By abstracting the inputs and expected outputs out from the body of the test, you can define the behaviour in just one test, and execute it multiple times.
We’ll look at the 3 main testing frameworks (xUnit, NUnit and MSTest) in this article, and we’ll start with xUnit.
xUnit
This is how we would update this test to be parameterised in xUnit:
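Using the same two-word example as before:

```csharp
using Xunit;

public class SplitWordsTests
{
    [Theory]
    [InlineData("myClass", "my class")]
    public void SplitWords_SplitsInputIntoWords(string input, string expectedOutput)
    {
        var result = StringHelper.SplitWords(input);

        Assert.Equal(expectedOutput, result);
    }
}
```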
This is a parameterised test, with just one set of inputs. The main differences are:
- Instead of a [Fact] attribute, we’ve got a [Theory] attribute
- We’ve got a new [InlineData] attribute with some data in there
- We’ve now got parameters in the test method itself
- The test uses these parameters instead of hard-coding the input/expected outputs
Let’s now add the rest of the test cases to this test:
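Using the same data as the individual tests above (including my assumed edge-case values):

```csharp
[Theory]
[InlineData("myClass", "my class")]
[InlineData("myClassName", "my class name")]
[InlineData("myclassname", "myclassname")]
[InlineData(null, "")]
[InlineData("   ", "   ")]
[InlineData("ABC", "a b c")]
public void SplitWords_SplitsInputIntoWords(string input, string expectedOutput)
{
    var result = StringHelper.SplitWords(input);

    Assert.Equal(expectedOutput, result);
}
```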
Now, all our test cases are neatly collected together. In our Test Explorer, we can see this test, and have the ability to expand it to see the results of each individual test case:
This technique is extremely powerful — you can very quickly write up a number of tests to verify your code with a large range of inputs, making sure all aspects of the code are covered. The test is also very maintainable, since it’s only defined once. And it’s very easy to add and remove test cases in the future, when your requirements change.
Now that we’ve seen how this works in xUnit, let’s see how it works in other test frameworks.
(Spoiler: it’s almost exactly the same)
NUnit
If you’re using NUnit, the tests work in a very similar way. Take a basic NUnit test:
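Here’s our original two-word test again, written with NUnit (names are illustrative, as before):

```csharp
using NUnit.Framework;

[TestFixture]
public class SplitWordsTests
{
    [Test]
    public void SplitWords_TwoWords_InsertsSpaceBeforeCapital()
    {
        var result = StringHelper.SplitWords("myClass");

        Assert.That(result, Is.EqualTo("my class"));
    }
}
```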
We can parameterise it like this:
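Reusing the same set of test data:

```csharp
[TestCase("myClass", "my class")]
[TestCase("myClassName", "my class name")]
[TestCase("myclassname", "myclassname")]
[TestCase(null, "")]
[TestCase("   ", "   ")]
[TestCase("ABC", "a b c")]
public void SplitWords_SplitsInputIntoWords(string input, string expectedOutput)
{
    var result = StringHelper.SplitWords(input);

    Assert.That(result, Is.EqualTo(expectedOutput));
}
```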
Note that the [Test] attribute has been replaced with individual [TestCase] attributes carrying the inputs and expected outputs.
MSTest
Finally, if you’re using MSTest (v2 onwards), the steps are also very similar. We can write our original test like this:
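Again with the same two-word example and illustrative names:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SplitWordsTests
{
    [TestMethod]
    public void SplitWords_TwoWords_InsertsSpaceBeforeCapital()
    {
        var result = StringHelper.SplitWords("myClass");

        Assert.AreEqual("my class", result);
    }
}
```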
And to parameterise it, we have:
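With the same test data once more:

```csharp
[DataTestMethod]
[DataRow("myClass", "my class")]
[DataRow("myClassName", "my class name")]
[DataRow("myclassname", "myclassname")]
[DataRow(null, "")]
[DataRow("   ", "   ")]
[DataRow("ABC", "a b c")]
public void SplitWords_SplitsInputIntoWords(string input, string expectedOutput)
{
    var result = StringHelper.SplitWords(input);

    Assert.AreEqual(expectedOutput, result);
}
```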
See that we’ve replaced the [TestMethod] attribute with [DataTestMethod], and added each test case as a [DataRow] attribute.
Summary
Parameterised tests are very easy to write, and they’re a great way of grouping multiple test cases together and avoiding duplicated code within your tests.
We can do it using xUnit, NUnit and MSTest, and probably any other half-decent test framework that you might be using.
The options you’ve got with these kinds of tests are vast, and I’d encourage you to take a look at the official documentation for each framework to find out what sort of things you can do.
But even at its most basic level, writing parameterised tests like this can enable you to quickly and efficiently cover all angles of your code, allowing you to make sure even the strangest edge cases are considered.
Epilogue
It didn’t feel like this article could finish without seeing all those tests pass. That just wouldn’t be right, would it?
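For completeness, here’s one possible SplitWords implementation that satisfies all of the test data used above. The edge-case behaviour (null becoming an empty string, whitespace-only input returned unchanged) reflects my assumed test values rather than anything set in stone:

```csharp
using System.Text;

public static class StringHelper
{
    public static string SplitWords(string input)
    {
        // Assumed edge-case behaviour: null becomes an empty string,
        // and whitespace-only input is returned unchanged.
        if (input is null) return string.Empty;
        if (string.IsNullOrWhiteSpace(input)) return input;

        var builder = new StringBuilder();

        for (var i = 0; i < input.Length; i++)
        {
            var c = input[i];

            if (char.IsUpper(c))
            {
                // Insert a space before each capital letter (except a leading one),
                // then lower-case it.
                if (i > 0) builder.Append(' ');
                builder.Append(char.ToLowerInvariant(c));
            }
            else
            {
                builder.Append(c);
            }
        }

        return builder.ToString();
    }
}
```

With this in place, all of the parameterised test cases above go green.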