This article deviates from my usual topic, end-to-end test automation, to cover unit/integration testing, i.e. the programmer’s domain. I am also a programmer; in fact, I won an international programming award. Since switching my day work from programming to automated testing in 2010, I have solely developed and maintained several highly acclaimed apps in my spare time (thanks to end-to-end, via-UI test suites).
The idea that “good software engineers write unit and integration tests” was well accepted over a decade ago. However, most programmers don’t know how to write good unit/integration tests. They spend a considerable amount of time (often much more than on coding) on unit/integration testing, to avoid being labelled “not a good programmer”.
“Most Automated Tests Suck” — James Shore, author of The Art of Agile Development.
However, those poor-quality tests usually turn into a maintenance nightmare. One main technical cause is using Mocks or Stubs wrongly, where Fakes should be used instead. This article shares my experience with this approach (since 2007).
What are Mocks, Stubs, and Fakes?
Martin Fowler explained the differences a long time ago (2007). You can read the excerpt below; however, I recommend skipping it.
From Martin Fowler’s famous “Mocks Aren’t Stubs” article:
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test.
Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
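To make Fowler’s distinctions concrete, here is a minimal Ruby sketch (all class and method names are mine, for illustration only). The stub gives canned answers with no real behaviour; the fake has a genuinely working implementation with a shortcut (in-memory storage) that makes it unsuitable for production.

```ruby
# A stub: canned answers only, no real behaviour.
class StubUserStore
  def find(id)
    { id: id, name: "Alice" } # same answer regardless of what was saved
  end
end

# A fake: a working implementation with a shortcut
# (in-memory hash instead of a real database).
class FakeUserStore
  def initialize
    @users = {}
  end

  def save(user)
    @users[user[:id]] = user
  end

  def find(id)
    @users[id]
  end
end

store = FakeUserStore.new
store.save(id: 1, name: "Bob")
puts store.find(1)[:name] # prints "Bob"
```

The key difference: code under test can exercise real save-then-find behaviour against the fake, which is exactly what integration tests need.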
Let’s read on and focus on the correct approach, rather than debating the terms.
Why are Mocks and Stubs not good for integration testing?
I first learned Mocks (jMock, for Java code) back in 2005. I was excited about it; Test-Driven Development (TDD) was a hot term then. One big challenge for TDD is testing integrations, and mocking provides a solution.
Here is the jMock expectation structure (other mock libraries, such as EasyMock, work in a similar way).
```java
invocation-count(mock-object).method(argument-constraints);
    inSequence(sequence-name);
    when(state-machine.is(state-name));
    will(action);
    then(state-machine.is(new-state-name));
```
A code example:

```java
m.checking(new Expectations() {{
    allowing(serverSettings).getServerUUID();
    will(returnValue("server uuid"));
    allowing(authStrategies).get(with(UnauthorizedAccessStrategy.ID));
    will(returnValue(myAuthStrategy));

    ScheduledExecutorService ses = new ScheduledThreadPoolExecutor(1);
    allowing(executorServices).getNormalExecutorService();
    will(returnValue(ses));
}});
```
However, as I wrote more tests, I found mocking gradually became a huge burden, especially as more team members started practising TDD.
“Well-designed code is easier to write unit tests for. Anyone can write unit/integration tests, but not everyone should. Simply, those programmers don’t qualify yet.” — Zhimin Zhan
For a simple code change, we often needed to spend 5X or more time updating the integration tests. It was really painful. We could not simply delete those integration tests, because they did provide value. Gradually, however, everyone realized that deleting most of them was inevitable. This “death march” feeling sucks! Eventually we did delete them, carrying the guilty feeling of “we are not good software engineers”. At the same time, we were wondering:
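Why does a simple change cost so much? A mock-based test asserts which exact calls are made, so it couples the test to the implementation: any internal refactor breaks it even when behaviour is unchanged. A fake-based test only checks outcomes. Here is a contrived Ruby sketch (all names are mine, for illustration):

```ruby
# Production code under test.
class OrderService
  def initialize(store)
    @store = store
  end

  def place(order_id)
    @store.save(order_id) # refactoring this call (e.g. to save_all([order_id]))
    "placed"              # would break a mock expectation on `save`,
  end                     # but a fake-based test below still passes
end

# A fake store: a working in-memory implementation.
class FakeStore
  attr_reader :saved

  def initialize
    @saved = []
  end

  def save(id)
    @saved << id
  end
end

fake = FakeStore.new
service = OrderService.new(fake)
result = service.place(42)
puts result             # prints "placed"
puts fake.saved.inspect # prints "[42]"
```

The fake-based test asserts the observable outcome (the order was stored), not the call sequence, so it survives refactoring.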
How do top software engineers do this (integration testing)?
We were regarded as senior Java contractors, yet none of us had really witnessed successful integration testing in practice. Could TDD be unachievable hype?
I later found the answer and the correct approach.
Confirmation of “Mocks and Stubs are not good”
Before showing you my approach (dating back to 2009), I will borrow some recent quotes from authorities to confirm that mocks and stubs are generally bad for integration testing (using them in a handful of demo tests does not count). The killer is test maintenance.
“The pendulum at Google has now begun swinging in the other direction, with many engineers avoiding mocking frameworks in favor of writing more realistic tests.” — the “Software Engineering at Google” book
Yes, I realized this long before some Google engineers did, not because I was smart, but because I knew Ruby (since 2006) and had a good testing mindset.
My Solution for Integration Testing: using Fakes
Many people don’t get Fakes (see Martin’s definition). Let me illustrate with an example, a real story.
Once, I worked on a large government project. Our department (A) needed to retrieve various data (via SOAP, essentially XML over HTTP) from department B. Testing (mostly during development, among our programmers) became a major issue.
Seeing no hope, I started writing a Ruby on Rails app to mimic the behaviour of B’s API services. I mentioned this to one close friend in the team, and he thought it was a crazy idea: “You are trying to implement another system for testing?” The short answer was “Yes”. What he did not know was that I used Ruby on Rails (the other programmers in the team only knew C#, and some knew Java).
Ruby on Rails is very easy to use, and you can get something up very quickly (it took me a few hours). More importantly, Ruby is a great language for text parsing and manipulation. Even I was impressed by the progress (at that time, my Ruby coding was not that good).
After getting the first major B API implemented, I showed it to a business analyst. She was deeply impressed, as I could control what B (this fake B) returned.
```ruby
def rg011(user_id)
  if user_id == "duplicate"
    duplication_error_xml
  elsif user_id == "not_exists"
    not_exists_xml
  else
    valid_xml # not static; includes some dynamic info
  end
end
```
She called the project manager, who was excited too. The PM wanted us to keep it low-profile for now (the tech lead, a C# programmer, did not like anything not from Microsoft).
Note: I have met a few senior engineers at software testing conferences (as fellow speakers), and found the Microsoft engineers quite nice and open-minded. However, a fair percentage of .NET developers and staff from “Microsoft Gold Partners” are quite often narrow-minded and lack skills.
Implementing all of B’s major APIs did not take long. A Ruby on Rails app has a backing database, around which I could code business logic to offer various flexible behaviours. The PM introduced this to our own pod (a section of a big team), and we named it “Ranki”.
As our pod used the term “Ranki” a lot, others took notice. At first, the tech lead was unhappy and came to talk to me about this. I directed them to my manager (as I was instructed to do). Anyway, the tech lead’s pod did not use Ranki, but other teams did.
That’s not the end of the story. A couple of years later, I joined department B as a tester (see my article series: Why I Switched my Day Work from Programming to Automated Testing?). My colleagues were impressed with my work on the first day. The reason: I had implemented another version of these APIs before.
What I implemented was a Fake (in testing terminology; “Fake” is positive in this context, unlike the ‘fake’ in ‘fake agile coach’). Ranki was an independent server, listening on a port, with a web interface to view and update data. At that time, I was not aware of the term “Fake”; I just did what I thought was the right thing to do.
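For readers who want to try this idea, here is a minimal sketch of such an independent fake server in plain Ruby (Ranki itself was a full Ruby on Rails app with a database and a web UI; the endpoint name, query parameter, and XML shapes below are my illustrative assumptions):

```ruby
require "socket"

# Return canned-but-dynamic XML for the fake "rg011" API,
# depending on the requested user_id.
def fake_b_response(user_id)
  case user_id
  when "duplicate"  then "<error>duplicate user</error>"
  when "not_exists" then "<error>user not found</error>"
  else "<user><id>#{user_id}</id><time>#{Time.now.to_i}</time></user>"
  end
end

# Start a tiny HTTP listener on the given port (0 picks a free port).
# Real requests look like: GET /rg011?user_id=duplicate HTTP/1.1
def start_fake_b(port)
  server = TCPServer.new("127.0.0.1", port)
  Thread.new do
    loop do
      client = server.accept
      request_line = client.gets.to_s
      user_id = request_line[/user_id=(\w+)/, 1]
      body = fake_b_response(user_id)
      client.print "HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\n" \
                   "Content-Length: #{body.bytesize}\r\n\r\n#{body}"
      client.close
    end
  end
  server
end
```

A whole team can then point the application’s department-B endpoint configuration at this local server and drive error paths (duplicates, missing users) on demand, which is impossible with the real external system.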
I have built similar Fake systems since Ranki. I remember one tech lead contacting me to come back and help restart a fake service. Why? My fake service dramatically reduced the time to run a complete end-to-end workflow.
Interacting with a real external system: 30 minutes.
Interacting with a fake system implemented in Ruby on Rails: 5 minutes.
Why don’t most software teams implement Fakes?