Is this a Mock?
When unit testing, we occasionally find the need to replace some of the production code to drive the tests.
Let’s explore in more detail what replacements we may use.
Overview
Gerard Meszaros has explored unit tests in his xUnit Test Patterns book in great detail.
In regard to code replacement when writing tests, he attempted to formalize the various names used and define their usage.
Uncle Bob wrote a nice blog post in which he summarized the whole topic in an elegant way.
Naming
We can say that a Mock is a kind of spy, a spy is a kind of stub, and a stub is a kind of dummy. But a fake isn’t a kind of any of them.
Quoted from Uncle Bob’s blog.
Gerard named these test objects Test Doubles.
- Dummy
- Stub
- Spy
- Mock
- Fake
Dummy
// Production
type Displayer interface {
Display(fmt string, args ...any) error
}
type Computer struct {
monitor Displayer
}
func NewComputer(monitor Displayer) *Computer {
return &Computer{
monitor: monitor,
}
}
func (c *Computer) Runtime() time.Duration {
// Returns the time the computer is running.
return 0
}
// Test Dummy
type DummyMonitor struct{}
func (_ DummyMonitor) Display(fmt string, args ...any) error { return nil }
// Test
func testNewComputerIsNotRunning() {
computer := NewComputer(DummyMonitor{})
assert(computer.Runtime(), isZero())
}
The dummy monitor is passed because the function signature requires it. The test itself is not expected to use it.
A Dummy does nothing. It acts as a placeholder.
If its methods return something, it is usually the default value (e.g. nil, 0).
Moreover, if someone does try to use it in the production code, it should probably blow up.
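For example, a stricter variant (a hypothetical sketch, not used in the test above) could panic to make any accidental usage visible:
// Hypothetical stricter dummy: blows up if the production code ever uses it.
type StrictDummyMonitor struct{}

func (StrictDummyMonitor) Display(fmt string, args ...any) error {
	panic("DummyMonitor should never be used in this test")
}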
Stub
// Production
func (c *Computer) Start() error {
if err := c.monitor.Display("Starting the computer"); err != nil {
return err
}
// Do the needed to start a computer.
return nil
}
// Test Stub
type StubMonitorDisplayFails struct {
err error
}
func (s StubMonitorDisplayFails) Display(fmt string, args ...any) error {
return s.err
}
// Test
func testComputerIsStartingWithDisplayFailure() {
monitor := StubMonitorDisplayFails{errors.New("an error")}
computer := NewComputer(monitor)
assert(computer.Start(), hasError("an error"))
}
The stub returns or sets a specific value, allowing us to check the tested unit in that condition.
The example above defines a stub with a specific error and expects to see that error returned when calling Start().
Spy
// Same production code as before
// Test Spy
type SpyMonitor struct {
displayData string
}
func (s *SpyMonitor) Display(format string, args ...any) error {
s.displayData = fmt.Sprintf(format, args...)
return nil
}
// Test
func testComputerIsStarting() {
monitor := &SpyMonitor{}
computer := NewComputer(monitor)
assert(computer.Start(), noError())
assert(monitor.displayData, isEqual("Starting the computer"))
}
A Spy records calls made to it. It can record how many times it was called or with which arguments.
But overusing spies is dangerous: it makes tests fragile and dependent on the implementation of the production code (rather than on its API).
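For illustration, a spy that also records the number of calls (a sketch along the same lines, not part of the original example) could look like this:
// Sketch: a spy that records both the call count and each formatted message.
type CountingSpyMonitor struct {
	calls    int
	messages []string
}

func (s *CountingSpyMonitor) Display(format string, args ...any) error {
	s.calls++
	s.messages = append(s.messages, fmt.Sprintf(format, args...))
	return nil
}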
Mock
// Same production code as before
// Test Mock
type MockMonitor struct {
displayData string
expectedDisplayData string
}
func (m *MockMonitor) Display(format string, args ...any) error {
m.displayData = fmt.Sprintf(format, args...)
return nil
}
func (m *MockMonitor) Assert() {
assert(m.displayData, isEqual(m.expectedDisplayData))
}
// Test
func testComputerIsStartingWithMock() {
monitor := &MockMonitor{expectedDisplayData: "Starting the computer"}
computer := NewComputer(monitor)
assert(computer.Start(), noError())
monitor.Assert()
}
A mock is set up in advance with the required data, including the details of what it expects at the behavior level.
It verifies and performs the assertion by itself.
Mocks have become popular because tooling has been built to generate them automatically for a given interface.
Their disadvantages are similar to those of spies: they require a setup stage, they mix expectations with assertions, and when generated by tooling they usually do a lot more than is needed.
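For illustration, this is roughly how the same test could look with a generated mock. This is a sketch only: it assumes gomock-style tooling where mockgen generated MockDisplayer/NewMockDisplayer from the Displayer interface, none of which appears in the original example.
// Sketch: the same test with an autogenerated mock (assumes mockgen output).
// Requires the gomock package, e.g. go.uber.org/mock/gomock
// (or the older github.com/golang/mock/gomock).
func TestComputerIsStartingWithGeneratedMock(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	monitor := NewMockDisplayer(ctrl) // generated mock implementing Displayer
	// The expectation is declared up front, including the return value.
	monitor.EXPECT().Display("Starting the computer").Return(nil)

	computer := NewComputer(monitor)
	assert(computer.Start(), noError())
}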
Fake
Fakes are different from the test doubles discussed so far.
Unlike the others, a Fake includes business logic, mimicking and simulating the behavior of the real object.
// Production
type Server interface {
Setup(args ...any)
Run() error
Status() string
}
func (c *Computer) RunService(service Server, priority int, timeout time.Duration) error {
service.Setup(priority, timeout)
if err := service.Run(); err != nil {
return err
}
// retry 10 times to verify that the service is running
for i := 0; i < 10; i++ {
if status := service.Status(); status == "running" {
return nil
}
time.Sleep(1 * time.Millisecond)
}
return errors.New("unable to run service, timeout reached")
}
To test the above production code we need a double that:
- Implements all the methods of the Server interface.
- Has logic to time out after timeout, including in the middle of the retry.
Because we identify the need for logic which mimics a (real) service, a Fake may fit here.
If the service is replaced in many other scenarios with much more complex logic, the owner of the concrete service may provide and maintain a Fake.
Test authors can then use such a Fake to test the code which depends on it:
- Without implementing it themselves.
- With assurance that its logic is kept up to date with the production code. As such, the production code author can see when a change breaks code that depends on that logic.
// Using a Fake
import servicefake "ehaas.io/superhero/service/fake"
func testComputerThatRunsServiceSuccessfully() {
computer := NewComputer(StubMonitorDisplayFails{}) // zero-value stub: Display succeeds
assert(computer.Start(), noError())
priority, timeout := 1, 30 * time.Millisecond
assert(computer.RunService(servicefake.NewService(), priority, timeout), noError())
}
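The servicefake package itself is not shown in this post. As a sketch, a minimal Fake (with assumed field names and assumed timeout behavior) could look roughly like this:
// Fake sketch: implements the Server interface and simulates a service that
// reports "running" until the configured timeout elapses, after which it
// reports "stopped" -- even in the middle of the caller's retry loop.
type FakeService struct {
	timeout   time.Duration
	startedAt time.Time
	running   bool
}

func NewService() *FakeService { return &FakeService{} }

func (f *FakeService) Setup(args ...any) {
	// Mirrors RunService: the second argument is expected to be the timeout.
	if len(args) > 1 {
		if d, ok := args[1].(time.Duration); ok {
			f.timeout = d
		}
	}
}

func (f *FakeService) Run() error {
	f.startedAt = time.Now()
	f.running = true
	return nil
}

func (f *FakeService) Status() string {
	if f.running && time.Since(f.startedAt) < f.timeout {
		return "running"
	}
	return "stopped"
}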
If you end up using Fakes a lot, it may be a sign that the tested code is not structured in an optimal manner for testing.
Summary
We have listed the test doubles and shown what functionality and capability each of them adds for test authors.
Using the appropriate test double helps to compose quality code that is elegant and clean.
It is important to understand that when we use spies and mocks, we may add a level of dependency on the behavior, which couples the tests to the production code they cover. Over-coupling the tests to the production code comes at the cost of breaking them more often, and therefore of losing a degree of confidence in their ability to protect the codebase.