Working Automated Test ≠ Good Reliable Test
A case study showing common mistakes in poor-quality, unreliable automated test scripts. Developing a working automated UI test is not enough.
My teenage daughter Courtney applied for a casual programmer position during this Uni break. Before the interview (via Zoom), I suggested: “You still have a few minutes, so why not write an automated test against the company’s website? You might be able to show them something relevant if they asked about your test automation skills”. She did.
While watching her write the test, I was pleased that she did quite well for the most part, though she needed improvement in a few areas. It turned out to be a good case study for illustrating a key concept in test automation: a working test ≠ a good test.
Anyone (6-year-old and up) can produce a simple working automated test using a recorder. It might work that one time, but will it work reliably under these conditions?
different running conditions, for example, on another machine with a different screen resolution
a new build, which might change some dynamically-generated IDs
UI changes that affect the layout of a web page
being run many times
The ability to create a highly reliable automated test script at creation time separates a good SET (Software Engineer in Test), a rare breed, from a mediocre one.
The test my daughter chose to write was “submitting an inquiry form”, a very common one.
1. Find a more reliable locator to avoid a test that works only on this version
The first operation is to enter the first name; its HTML is shown below.
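(The original article showed the field’s HTML; the fragment below is reconstructed from the locators used in this section, so treat the exact markup as an assumption.)
<input type="text" id="firstname-95290f03-f944-4891-86ac-955c7ff63917" name="firstname">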
Courtney, like me, was aware of test recorders but rarely used them. A recorder typically generates working but brittle test statements such as:
driver.find_element(:id, "firstname-95290f03-f944-4891-86ac-955c7ff63917").send_keys("Wise")
The reason is simple: on the next build, the dynamically generated ID might change. Courtney selected the name locator instead, the most reliable locator in this case:
driver.find_element(:name, "firstname").send_keys("Wise")
Much better.
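When a stable name attribute is not available, there are other reasonably robust options. A sketch, assuming the markup above and that the “firstname-” prefix of the dynamic ID survives rebuilds:
# CSS attribute selector, stable as long as the name attribute stays
driver.find_element(:css, "input[name='firstname']").send_keys("Wise")
# XPath matching only the stable prefix of the dynamic ID
driver.find_element(:xpath, "//input[starts-with(@id, 'firstname-')]").send_keys("Wise")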
A note on efficiency here: she entered the test steps quickly using snippets in TestWise (you can find similar features in code IDEs).
If you are interested in how Courtney develops automated tests (she has been doing so since she was 12 years old), read her article: “Set up, Develop Automated UI tests and Run them in a CT server on your First day at work”.
2. Use dynamic data to avoid a test that works only once
The next two operations are to enter ‘Last Name’ and ‘Email’. Inexperienced testers will fill these in with simple fixed test data:
driver.find_element(:name, "lastname").send_keys("Tester")
driver.find_element(:name, "email").send_keys("a@b.com")
It will work. But if the server validates uniqueness, it will fail from the second execution onwards.
A better way is to use dynamically generated data, such as from the Faker library:
driver.find_element(:name, "lastname").send_keys(Faker::Name.last_name)
driver.find_element(:name, "email").send_keys(Faker::Internet.email)
3. Intermittent failure: “unable to click the ‘Submit’ button”
The step driver.find_element(:xpath, "//input[@value='Submit']").click
failed in her test. Yet when she debugged that step on its own, it sometimes passed.
Courtney was confused for a while; I had assumed she would know this one. Given the time constraint (her upcoming interview), I gave her a hint.
The reason: the ‘Submit’ button sits right at the bottom edge of the browser window, so Selenium may consider it not clickable.
Note: in most cases, Selenium handles clicking visible elements fine, but there are edge cases like this one. I have never encountered it when testing my own apps developed in Ruby on Rails, but occasionally with other apps. I suspect it has to do with how the page uses JavaScript and CSS.
A simple way to verify this is to scroll the browser window and then rerun the click step, using the ‘Run Selected test step against current browser’ feature in TestWise (you can see a demo video in this article: “Attach Selenium Test Steps Against An Existing Browser”).
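You can also confirm the diagnosis in code by catching the specific error. A sketch, assuming the Ruby Selenium bindings; depending on the browser and driver version, the failure may surface as a different error class:
begin
  driver.find_element(:xpath, "//input[@value='Submit']").click
rescue Selenium::WebDriver::Error::ElementClickInterceptedError,
       Selenium::WebDriver::Error::ElementNotInteractableError => e
  # the click failed as suspected; scroll first, then retry
  puts "Click failed as suspected: #{e.class}"
end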
4. Maximize the window — works, but not good
After understanding the cause, Courtney quickly came up with a solution: maximizing the browser window.
driver.manage.window.maximize
sleep 0.5
driver.find_element(:xpath, "//input[@value='Submit']").click
It worked but was not ideal. What happens when the tests run on a build agent machine with a smaller screen resolution?
I often see test scripts that maximize the browser by default; in most cases it is unnecessary.
5. Scroll the browser (fixed) — works, but still not perfect
After hearing my question above, Courtney realized she could scroll the page instead:
driver.execute_script("window.scrollTo(0, 500);")
sleep 0.5
driver.find_element(:xpath, "//input[@value='Submit']").click
She knew the fixed scrolling (to 500) was no good and planned to refine it after the interview.
6. Scroll the browser (relative) — good
She optimized the test script after the interview.
elem_submit = driver.find_element(:xpath, "//input[@value='Submit']")
elem_submit_y = elem_submit.location.y
driver.execute_script("window.scroll(0, #{elem_submit_y - 100})")
sleep 0.5
elem_submit.click
Now the test execution is very reliable!
Note: the common scrollIntoView approach below did not work for this website.
driver.execute_script("arguments[0].scrollIntoView(true);", elem_submit)
As you can see, even for a very simple test with only four user operations on one web page, there can be several versions of working but less-than-perfect test scripts. Here is the complete test script with all the improvements applied:
load File.dirname(__FILE__) + "/../test_helper.rb"
require "faker"

describe "Demo" do
  include TestHelper

  before(:all) do
    @driver = Selenium::WebDriver.for(browser_type, browser_options)
    driver.manage.window.resize_to(1280, 720)
    driver.get(site_url)
  end

  after(:all) do
    driver.quit unless debugging?
  end

  it "Contact with relative scrolling" do
    visit("/contact")
    driver.find_element(:name, "firstname").send_keys("Wise")
    driver.find_element(:name, "lastname").send_keys(Faker::Name.last_name)
    driver.find_element(:name, "email").send_keys(Faker::Internet.email)

    # scroll relative to the Submit button's position, then click
    elem_submit = driver.find_element(:xpath, "//input[@value='Submit']")
    elem_submit_y = elem_submit.location.y
    driver.execute_script("window.scroll(0, #{elem_submit_y - 100})")
    sleep 0.5
    elem_submit.click
  end
end
Imagine throwing 50 unreliable automated tests at a CI server to do “CI/CD”: what are the odds of getting a green build (all tests passing)? Even if each test independently passes 99% of the time, the chance that all 50 pass in a single run is 0.99^50, roughly 61%, so about two in every five builds would be red for no real reason. That’s why most software projects score at Level 0 or 1 on the AgileWay Continuous Testing Grading.
Furthermore, the test above is reliable in execution but not easy to maintain, so it is not yet done: Courtney needs to refactor it to improve its readability and maintainability. Please read my other article “Maintainable Automated Test Design” and the upcoming “Functional Test Refactoring”, or my book “Practical Web Test Automation”.