Story: “What is the Most Challenging in E2E Test Automation?”
Many software testers know the answer when they think deeply about it. In practice, however, they often neglect it and focus on trivial matters. The result: test automation failures.
This article is part of the Stories series.
Table of Contents:
· The Story
· This is an Important Realization but often Neglected
· The Story of S Continues (in a few years’ time)
The Story
In 2006, I worked at a large tech company (over 500 IT staff, considered large in my city) as a test automation engineer (contractor). One day, the newly joined testing director, S, who was responsible for the overall testing process in the company, invited all testers (~70) to a meeting.
In the meeting, after the usual introduction (his and then self-intro from everyone), S asked what test automation technologies were used in different teams. There were many answers:
Micro Focus QTP
Selenium WebDriver
- Java
- Ruby
- C#
- JavaScript
- Python
SoapUI
Ranorex
WebDriverIO
S and everyone else (including me) were shocked by the long list. Apparently, it was a mess, lacking planning and direction. Also, judging from the testers’ expressions, it seemed test automation attempts had failed in all teams except mine.
S, I think, wanted to soften the atmosphere. He turned and wrote this question on the whiteboard: “What is the Most Challenging in E2E Test Automation?”
S encouraged everyone to come up and write one answer on the whiteboard. So, many did.
Lack of Training
Not enough time
The test framework/tool is not reliable
The application does not provide IDs for every element
Test Data Issues
Not run well in Jenkins
Git branch policies
…
Soon, the whiteboard was almost full; there were at least 20 answers.
Before the meeting, I had told my mentee (who had recently moved from manual testing to test automation) that we would stay silent if possible. The reason: my recent rescue of a failed test automation effort had already attracted some attention. (For more, check out Case Study: Rescue Unreliable 20 hours of Automated Regression Testing in Jenkins ⇒ 6-Minute Highly-Reliable in BuildWise CT Server)
From my experience, nothing much gets achieved in this kind of big gathering. I did not want to get involved in yet another test automation steering committee, which usually amounted to useless talking and debating. I preferred to remain low-profile and do hands-on, real test automation.
The whiteboard was almost full. S asked, “Any more?”, and looked at me. Apparently, he had heard about me or my work. I could no longer hide, so I got up and wrote “Test Maintenance” at the bottom right of the whiteboard.
Then, S asked everyone to vote by putting a tick next to each issue they agreed with.
The №1 (with the most ticks) issue: “Test Maintenance” (by me).
This was an interesting finding. Note that it was the last item written on the whiteboard. Of course, during the whole event there were conversations among all the testers, but everyone (except my mentee and me) focused on the less important issues.
By the way, the №2 issue was “Lack of Training”; I will write a separate article on that.
S sensed the awkwardness and quickly concluded the meeting by saying, “This is a good meeting; we will act on the feedback in the coming months.” Of course, there were no follow-up actions (at least not before I left the company a few months later).
This is an Important Realization but often Neglected
“Test Maintenance” is the primary effort in test automation, and it should be the basis for any related decision. Sadly, this is often neglected in practice, as in this story. For example, many software companies over-emphasize test creation when choosing a test automation framework/tool. Many managers and tech leads fell for the sales pitches of bad and expensive test automation tools, such as QTP and Ranorex, which claimed how easy it is to create automated tests using record-and-playback or the assistance of an object-identification GUI utility. History proved them wrong. The code-based (and free) approach, such as Selenium WebDriver and Playwright, has dominated test automation since 2011.
Even within the code-based camp, the practice of focusing on the human readability of test steps is also wrong. Bad examples:
Protractor.JS (deprecated)
- added simpler syntax on top of Selenium, but lost its consistency and intuitiveness
Cucumber and other Gherkin-styled test syntax frameworks
- check out Why Gherkin (Cucumber, SpecFlow,…) Always Failed with UI Test Automation?
Also, since ChatGPT’s release in December 2022, there has been hype around using ChatGPT for test automation. I reviewed it (see my article, ChatGPT is Useless for Real Test Automation); it is not even able to create a simple working login test. AI-testing lovers might argue, “ChatGPT will get better, with training”. Yes, maybe. But think of it from the “Test Maintenance” perspective. Suppose ChatGPT created a reasonably good suite of automated tests (after a long chat with you). Can you maintain them? The fact is, your application changes frequently.
The correct approach is to embrace Maintainable Automated Test Design and Functional Test Refactoring.
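To illustrate the idea (this is my own minimal sketch, not something from that meeting), here is one small example of functional test refactoring with Selenium WebDriver in Ruby. The URL, element IDs and credentials below are hypothetical; the point is that the login steps live in one reusable function, so when the login page changes, only that one function needs updating.

require "selenium-webdriver"

# Reusable login helper: every test that needs a signed-in session calls this.
# When the login page changes, only this function needs updating.
def login(driver, username, password)
  driver.find_element(id: "username").send_keys(username)
  driver.find_element(id: "password").send_keys(password)
  driver.find_element(id: "login-button").click
end

driver = Selenium::WebDriver.for :chrome
begin
  driver.navigate.to "https://example.com/login"  # hypothetical app URL
  login(driver, "agileway", "test01")             # hypothetical credentials
  raise "Login failed" unless driver.page_source.include?("Welcome")
  puts "Login test passed"
ensure
  driver.quit
end

The same idea scales up: keep raw element locators and low-level steps behind helper functions (or page classes), and refactor tests continually, just like production code. That is where the maintenance effort, the №1 issue on that whiteboard, is actually won or lost.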