Test Creation Only Accounts for ~10% of Web Test Automation Efforts
This will help clear up many doubts and misconceptions about Test Automation.
When IT professionals talk about End-to-End Test Automation, there are two completely opposite views: Very Easy or Extremely Hard. However, most will privately agree that they have never witnessed real test automation success, not even once in their entire careers. This article will clarify why.
For simplicity, I will explain using the most common form of automated software testing: web test automation.
Table of Contents:
· Is End-to-End (via UI) Test Automation Easy or Hard?
∘ View 1: End-to-End Test Automation is Very Easy
∘ View 2: End-to-End Test Automation is Extremely Hard
· In theory, Web Test Automation should be easy, but …
· Why Do Most End-to-End (via UI) Test Automation Efforts Fail?
∘ 1. Test Creation (~10%)
∘ 2. Test Stabilization and Refinement (~30%)
∘ 3. Ongoing Maintenance (~60%)
· Summary
Is End-to-End (via UI) Test Automation Easy or Hard?
There are two extreme, completely opposite answers.
View 1: End-to-End Test Automation is Very Easy
1. Salesmen of test automation tool vendors often say it is “Easy” (with their tools)
Typical selling points of expensive test automation tools are: “Record-n-Playback” and “Object Identification Utility”.
Some readers will say that all these kinds of tools are outdated and have been replaced by programming-language-based test automation frameworks, such as Selenium-WebDriver and Playwright. That’s true. However, the damage has been done: in many people’s minds, test automation is associated with record-n-playback.
“Record/playback testing tools should be clearly labeled as ‘training wheels’.”
“Thinking of renaming the IDE to Selenium Trainer (a record-and-playback utility).”
“What to do with the Selenium IDE, no self respecting developer will use it.”
- Jason Huggins, creator of Selenium (v1) at AAFTT Workshop 2009
Here, I want to point out that in sales demos, creating (prepared) automated test scripts does look very easy. I will elaborate on that shortly. For now, accept the fact that marketing people have been selling “test automation is easy”.
2. Senior Software Engineers often think writing automated test scripts is an easy job.
In the first episode of the brilliant Silicon Valley TV show, two brogrammers were teasing Richard, a QA engineer, about his pet programming project, Pied Piper: “A tester develops an app?!”
The reality: Richard is at least 50X better than these two in programming.
Have you seen programmers presenting their “new test automation frameworks”? I have, a number of times, and each one was a complete failure. They thought they knew a bit about test automation and assumed it was easy.
View 2: End-to-End Test Automation is Extremely Hard
1. Michael Feathers, the renowned agile expert and author of the “Working Effectively with Legacy Code” book, wrote this story on his company’s blog (in 2009).
It happens over and over again. I visit a team and I ask about their testing situation. We talk about unit tests, exploratory testing, the works. Then, I ask about automated end-to-end testing and they point at a machine in the corner. That poor machine has an installation of some highly-priced per seat testing tool (or an open source one, it doesn’t matter), and the chair in front of it is empty. We walk over, sweep the dust away from the keyboard, and load up the tool. Then, we glance through their set of test scripts and try to run them. The system falls over a couple of times and then they give me that sheepish grin and say “we tried.” I say, “don’t worry, everyone does.”
2. A Google VP thinks great testers are “Gold”.
“In my experience, great developers do not always make great testers, but great testers (who also have strong design skills) can make great developers. It’s a mindset and a passion. … They are gold”.
- Google VP Patrick Copeland, in an interview (2010)
3. Alan Page, the first author of the “How We Test Software at Microsoft” book
“For 95% of all software applications, automating the GUI is a waste of time. For the record, I typed 99% above first, then chickened out. I may change my mind again.” — Alan Page’s Blog (2008)
“95% of the time, 95% of test engineers will write bad GUI automation just because it’s a very difficult thing to do correctly”.
- from this interview with Microsoft test guru Alan Page (2015)
4. Gerald Weinberg, a software legend (of an early generation)
“Testing is harder than developing. If you want to have good testing you need to put your best people in testing.”
- Gerald Weinberg, in a podcast (2018)
5. Robert C. Martin, co-author of the Agile Manifesto:
“Automated testing through the GUI is intuitive, seductive, and almost always wrong!”
- his blog (in 2009)
In theory, Web Test Automation should be easy, but …
So, there are completely contradictory views. Which one is correct?
In theory, web test automation should be easy, for the very simple reasons below:
Web technologies, such as HTML and CSS (defined in W3C standards), have remained largely unchanged for over 20 years.
In other words, automated testers are facing a very static target. As you know, this is very rare in software development. Programmers: think of the languages and frameworks you have used. For example, the hugely popular Angular.js was deprecated.

The way (framework, tool, and practice) of testing applies to all websites, regardless of what coding language is behind them.
This means that if an automated tester has ever done good web test automation, even once, they could replicate that success in any job for 20+ years.
Let me elaborate a bit further. An experienced automated tester, Toby, might have worked on dozens of projects over his past 19 years. He has met and worked with many other software engineers and test automation engineers. If any one of them (there should have been hundreds) had been a real test automation engineer and Toby had learned from them, Toby could do a reasonably good job in his current role (his 20th year). Yet Toby failed completely. That’s the current reality.
To put it simply, 99+% of so-called “Test Automation Engineers” are fake (see the figure used above by Alan Page, the testing guru at Microsoft; that is roughly a 99% failure rate among Microsoft-calibre engineers, who should be above the industry average). Their work, unstable automated test scripts, provides little or no value to the software teams.
Some readers may wonder why software teams hire fake automated testers. Because real ones are so rare, most managers have yet to meet one; it is all a blind-guess game. Above all, many managers don’t care: they just need some kind of “test automation” to justify “Agile”.
Why Do Most End-to-End (via UI) Test Automation Efforts Fail?
Besides various technical reasons, management doesn’t understand that test creation is only a very small portion of the overall test automation efforts. Fundamentally, test automation efforts fall into three categories (or phases in order):
1. Test Creation, ~10% of total efforts
2. Test Stabilization and Refinement, ~30%
3. Ongoing maintenance, ~60%
Please note, this is just my estimation, based on a typical well-designed end-to-end test suite created by manual scripting (using best practices such as Page Object Models). For recorded test scripts, your ability to do stabilization is very limited, and ongoing maintenance, as the “Agile Testing” book pointed out (see the quote below), will be impossible or too hard. This explains why all those record/playback tools are either dead or dying.
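For readers less familiar with the Page Object Model mentioned above, here is a minimal sketch in Python with Selenium WebDriver; the page class name, element IDs and URL are hypothetical, purely for illustration.

```python
# A minimal Page Object sketch (Selenium WebDriver, Python).
# The LoginPage class, element IDs and URL are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Wraps the login page so tests call intent-revealing methods
    instead of raw locators; when the page changes, only this class changes."""

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()


# Usage in a test:
driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL
LoginPage(driver).login_as("bob", "secret")
driver.quit()
```

The point of the pattern is that when the login page changes, only the page class needs updating, which is exactly what keeps the later maintenance phase manageable.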
If you have more tests or more frequent releases, the ratio of “ongoing maintenance” will be even higher.
Unfortunately, most managers and tech leads mainly see the “Test Creation” effort.
“Focusing on test creation” is wrong (for simple and obvious reasons, which I will outline shortly) but seems very logical to managers/tech leads. Back in the waterfall days, an activity was mostly planned to be done once, and mostly forgotten thereafter. This is not much different from how fake agile coaches / fake scrum masters treat user stories (moving one to the “Done” column) in so-called “agile” projects.
For test automation, once a new automated test script is developed and working, that represents merely ~10% of the effort. There is a lot more work ahead, which many don’t foresee.
Because of this narrow view (see the above image), managers often fall for the sales pitches of test automation vendors. They think: “This tool can create automated test scripts quickly, using record/playback.” What they did not know would lead to a maintenance disaster.
Next, I will explain these three phases over the life of one automated test script.
1. Test Creation (~10%)
Record/playback is the wrong way to create automated test scripts, and I think most people would agree with that now.
Manual scripting is the best approach and can be highly efficient, too. More often than not, I developed working automated tests (with much better quality, but let’s leave that aside for now) much quicker than my colleagues using record-n-playback tools. I have run these comparisons a few times.
People may wonder: how come? To clear up any doubts, I need to address two aspects:
Manual Scripts can be highly efficient.
In movies, you see the good hackers mostly typing rather than pointing and clicking, right? Check out this tutorial, a write-up of a training session (~15 mins) I conducted for two IT laymen.
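To give a sense of how little code a hand-written test needs, here is a minimal sketch in Python with Selenium WebDriver; the site URL, element names and the crude assertion are hypothetical placeholders, not taken from the tutorial above.

```python
# A minimal hand-scripted web test (Selenium WebDriver, Python).
# The URL, element names and expected text are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")             # hypothetical site
    driver.find_element(By.NAME, "q").send_keys("agile testing")
    driver.find_element(By.NAME, "search").click()
    assert "agile testing" in driver.page_source  # crude check, kept short
finally:
    driver.quit()
```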
Using TestWise’s “Attach test execution to the existing browser” feature, scripting can be highly efficient.

Record-n-Playback usually doesn’t work well
For a start, record/playback does not work well for dynamic apps (such as AJAX-heavy ones). You often need to perform extra proprietary operations.
Furthermore, if you want to enter a dynamic date, such as yesterday’s date, the recorded script (even if recorded 100% correctly) would soon be invalid. That’s why all record-n-playback tools offer scripting customization, in languages such as VBScript (QTP).
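In a hand-written script, by contrast, a dynamic value such as yesterday’s date is a single line of ordinary code. Here is a minimal sketch in Python with Selenium, assuming a hypothetical booking page, field name and date format.

```python
# Entering a dynamic date (yesterday) in a scripted test.
# The URL, the "start_date" field name and the date format are hypothetical.
from datetime import date, timedelta

from selenium import webdriver
from selenium.webdriver.common.by import By

yesterday = (date.today() - timedelta(days=1)).strftime("%d/%m/%Y")

driver = webdriver.Chrome()
driver.get("https://example.com/booking")  # hypothetical page
driver.find_element(By.NAME, "start_date").send_keys(yesterday)
driver.quit()
```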
2. Test Stabilization and Refinement (~30%)
Many modern apps use dynamic GUIDs (very common in the Microsoft camp), which means the recorded test scripts only work for the build they were recorded against. This is much worse than not working at all (which would be detected quickly): on the next build, many of these test scripts will be invalid.
In this second phase, we need to make sure the test scripts work reliably under different running conditions. There are many possible causes of unreliable test scripts:
Prone to web application structure changes
Using a GUID is a simple case. There are many more subtle cases, e.g. you want to click the link with the text “Next”; if a programmer later adds another “Next” link before this section, the test script is no longer valid.
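One common way to reduce this kind of breakage, offered here only as a sketch and not as the author’s prescribed technique, is to scope the locator to the section you actually care about rather than matching link text anywhere on the page. In Python with Selenium, with a hypothetical URL and section ID:

```python
# Scoping a locator to a section so an unrelated "Next" link added
# elsewhere on the page does not break the test.
# The URL and the "results" section ID are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/search")  # hypothetical page

# Fragile: grabs the first "Next" link anywhere on the page.
# driver.find_element(By.LINK_TEXT, "Next").click()

# More robust: find the link only within the intended section.
results = driver.find_element(By.ID, "results")
results.find_element(By.LINK_TEXT, "Next").click()

driver.quit()
```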