Achievement Quantifier Round 3 – Testing Spectre.Console Commands using XUnit, AutoFixture, and an In-Memory SQLite Database

This post shares an experience of unit testing in C# using the XUnit, AutoFixture, and Spectre.Console.Testing libraries plus an in-memory SQLite database. The motivation for this post is to prepare a testing project setup for future reference.
The example project is Achievement Quantifier, first introduced in this post and later expanded upon in the subsequent post. It is a .NET CLI tool for tracking and managing achievements. It uses Spectre.Console commands for the user interface and EF Core with an SQLite database for data persistence.
Access the project’s source code on GitHub 📁, check the initial state 0️⃣, explore the pull request ➡️, and view the final state 1️⃣.
Preparatory Steps
On Determining What to Test
The basic building blocks of the Achievement Quantifier project are Spectre.Console commands, which are independent of each other and directly use EF Core's data context for data manipulation. They are chosen as the units under test.
The chosen database testing strategy is SQLite in-memory: each test creates a command and an in-memory SQLite database, invokes the command, and then verifies the console output and the database state.
On Logging
Up to now, the app has relied on logging to display information about entity manipulations, like this:
info: AQ.Console.Commands.AchievementCommands.ListAchievements[0]
Found 1 achievements.
This is a misuse of logging, because logs are for developers, not users.
The first commit replaces all uses of ILogger<T> with AnsiConsole.WriteLine calls.
The same information is now printed to the console in this way:
Found 1 achievements.
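As a rough before-and-after sketch of that change (the field name _logger and the exact call sites are assumptions; the commit has the real diff):

// Before: user-facing information routed through the logger
_logger.LogInformation("Found {Count} achievements.", achievements.Count);

// After: the same information written directly to the console
AnsiConsole.WriteLine($"Found {achievements.Count} achievements.");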
On the IAnsiConsole Interface
Tests typically run in parallel. This means that static classes, such as AnsiConsole, are accessed by multiple tests at the same time. This leads to console output conflicts and failing tests. The solution is to use a separate TestConsole instance for each test.
The second commit prepares the commands for using TestConsole by introducing a dependency on IAnsiConsole in each command.
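After this change, a command might look roughly like the following. This is a sketch inferred from the first test in the next section, not the repository's exact code:

public sealed class ShowStatus : Command<EmptyCommandSettings>
{
    private readonly IHostEnvironment _hostEnvironment;
    private readonly IAnsiConsole _console;

    // The console is injected instead of using the static AnsiConsole class.
    public ShowStatus(IHostEnvironment hostEnvironment, IAnsiConsole console)
    {
        _hostEnvironment = hostEnvironment;
        _console = console;
    }

    public override int Execute(CommandContext context, EmptyCommandSettings settings)
    {
        _console.WriteLine($"The runtime environment is '{_hostEnvironment.EnvironmentName}'");
        return 0;
    }
}

In production the app passes AnsiConsole.Console; tests pass a TestConsole instance instead.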
First Test
The third commit adds the unit test project and the first test:
public class ShowStatusTests : CommandTestsBase
{
    [Fact]
    public void TypicalCase()
    {
        // Arrange
        IHostEnvironment hostEnvironment = new HostingEnvironment()
        {
            EnvironmentName = "Test",
        };
        ShowStatus command = new(hostEnvironment, Console);

        // Act
        int result = command.Execute(CommandContext, new EmptyCommandSettings());

        // Assert
        Assert.Equal(0, result);
        Assert.Equal("The runtime environment is 'Test'\n", Console.Output);
    }
}
The test follows the Arrange/Act/Assert pattern. All other tests will follow the same pattern too, because it keeps tests readable and is a widely accepted best practice.
The ShowStatusTests class inherits from the CommandTestsBase class, which implements the common configuration for all tests that use Spectre.Console commands:
public class CommandTestsBase
{
    protected readonly IRemainingArguments RemainingArguments = new Mock<IRemainingArguments>().Object;
    protected readonly CommandContext CommandContext;
    protected readonly TestConsole Console = new();

    protected CommandTestsBase()
    {
        CommandContext = new([], RemainingArguments, "", null);
    }
}
In XUnit, test fixture setup is handled in constructors, or simply in property initializers in simple cases. Because XUnit creates a new instance of the test class for each test, fixtures remain independent. Inheritance helps with reusing fixture setups: the base class constructor is invoked first, and the child class constructor second.
In this and all the following tests, the CommandContext is actually just a stub. All command inputs are provided via the settings class.
Good-path Tests
"Good-path" is a scenario in which the app behaves as expected. All inputs are valid, and no exceptional conditions occur. This means that the app meets the basic functionality requirements. Other names for the same concept include "happy-path", "positive-path", and "typical case".
The fourth commit adds the good-path tests for achievement classes and achievements.
The tests share a few common characteristics.
The first is the shared configuration of the data context, managed in the DbTestsBase class. Test classes inherit from this base class and use its CreateDataContext() method to access the data context.
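The repository holds the actual implementation; as a minimal sketch, assuming the project's EF Core context is called DataContext and accepts DbContextOptions (and with Microsoft.Data.Sqlite and Microsoft.EntityFrameworkCore in scope), the base class could look like this:

public abstract class DbTestsBase : CommandTestsBase, IDisposable
{
    // Keeping a single open connection alive preserves the in-memory
    // database for the whole lifetime of the test.
    private readonly SqliteConnection _connection = new("DataSource=:memory:");

    protected DbTestsBase()
    {
        _connection.Open();
    }

    protected DataContext CreateDataContext()
    {
        DbContextOptions<DataContext> options = new DbContextOptionsBuilder<DataContext>()
            .UseSqlite(_connection)
            .Options;
        DataContext dataContext = new(options);
        dataContext.Database.EnsureCreated();
        return dataContext;
    }

    public void Dispose() => _connection.Dispose();
}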
The second is the AutoFixture library, which is relied upon for generating test data when the specifics don't matter. For example, instead of coming up with achievement class names such as NAME1 and NAME2, the auto-generated names take the form namea2b835e0-6e05-4bea-afeb-7743501848bb.
However, certain cases require manual customisation. For example, as of version 4.18.1, AutoFixture fails to generate DateOnly values. Also, navigation properties in EF Core entity classes have to be excluded. Here is an example of manual customisations:
public void Customize(IFixture fixture)
{
    fixture.Register<AchievementClass>(() => new()
    {
        Name = fixture.Create<string>(),
        Unit = fixture.Create<string>(),
    });
    fixture.Customize<DateOnly>(o => o.FromFactory((DateTime dt) => DateOnly.FromDateTime(dt)));
    fixture.Register<Achievement>(() => new()
    {
        CompletedDate = fixture.Create<DateOnly>(),
        Quantity = fixture.Create<int>(),
    });
}
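The method above is an ICustomization.Customize implementation. The [Theory, DefaultAutoData] attribute seen in the tests below plugs such a customisation into AutoFixture's XUnit integration; a plausible wiring, with the class names being assumptions:

public class DefaultAutoDataAttribute : AutoDataAttribute
{
    // DefaultCustomization is assumed to be the class hosting the
    // Customize(IFixture) method shown above.
    public DefaultAutoDataAttribute()
        : base(() => new Fixture().Customize(new DefaultCustomization()))
    {
    }
}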
The third common characteristic is assertions. The tests typically assert three conditions: 1) the return code is 0, which indicates success in console apps; 2) the database state is as expected; and 3) the console output is as expected.
Here is a typical good-path test:
[Theory, DefaultAutoData]
public async Task ShouldUpdate(AchievementClass achievementClass, string name, string unit)
{
    // Arrange
    _dataContext.AchievementClasses.Add(achievementClass);
    await _dataContext.SaveChangesAsync();
    UpdateAchievementClass.Settings settings = new()
    {
        Id = achievementClass.Id,
        Name = name,
        Unit = unit,
    };

    // Act
    int result = await _command.ExecuteAsync(CommandContext, settings);
    AchievementClass? updated = _dataContext
        .AchievementClasses
        .SingleOrDefault(a =>
            a.Id == achievementClass.Id &&
            a.Name == name &&
            a.Unit == unit);

    // Assert
    Assert.Equal(0, result);
    Assert.NotNull(updated);
    Assert.Contains(updated.ToString().RemoveWhitespace(), Console.Output.RemoveWhitespace());
}
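The RemoveWhitespace() calls make the final assertion robust against table padding and line breaks in the console output. It is presumably a small test helper; a sketch of one possible implementation (the class name is an assumption):

public static class StringExtensions
{
    // Strips all whitespace so that console formatting does not
    // affect the comparison.
    public static string RemoveWhitespace(this string value) =>
        string.Concat(value.Where(c => !char.IsWhiteSpace(c)));
}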
Bad-path Tests
Complementary to good-path tests are bad-path tests, which explore error conditions and edge cases.
When the commands are tested, the input is provided via the settings classes. However, this approach bypasses settings validation, which complicates testing of missing required options and default values.
A potential solution is to move validation from the settings to the commands and make all settings properties nullable. Nullability allows distinguishing between an option that was not provided and one provided with a default value, such as an integer variable set to zero or a string variable holding an empty string.
The fifth commit implements this approach.
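Under this approach, a command validates its own settings. A heavily simplified sketch of what the quantity check in UpdateAchievement might look like (the structure, option name, and message are assumptions; the ArgumentException matches the test below):

public sealed class UpdateAchievement : AsyncCommand<UpdateAchievement.Settings>
{
    public class Settings : CommandSettings
    {
        // Nullable so that an omitted option is distinguishable
        // from an explicitly supplied zero.
        [CommandOption("--quantity")]
        public int? Quantity { get; set; }

        // Other options (Id, Name, Date) omitted for brevity.
    }

    public override Task<int> ExecuteAsync(CommandContext context, Settings settings)
    {
        if (settings.Quantity is null)
        {
            throw new ArgumentException("Quantity must be provided.", nameof(settings));
        }

        // ... perform the update and return 0 on success ...
        return Task.FromResult(0);
    }
}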
Here is a typical bad-path test:
[Theory, DefaultAutoData]
public async Task ShouldFailWhenQuantityIsNotProvided(DateOnly date)
{
    // Arrange
    UpdateAchievement.Settings settings = new()
    {
        Id = _achievement.Id,
        Name = _achievement.AchievementClass.Name,
        Date = date,
        Quantity = null
    };
    Task Action() => _command.ExecuteAsync(CommandContext, settings);

    // Assert
    await Assert.ThrowsAsync<ArgumentException>(Action);
}
Result
With the addition of good-path and bad-path tests, the Achievement Quantifier project is now thoroughly tested and ready for further iterations and improvements.