Appium Desktop Test Automation Milestone: 60,000 test executions over 2 years for a 300-test regression suite
Free and open-source, Appium/WinAppDriver is a feasible solution for test automation of desktop apps.
With today’s regression run (for my Desktop App: TestWise) in the BuildWise CT server, I reached two milestones:
The test case count reached 300.
The total test executions exceeded 60,000 times over 2 years.
Some might think: “300 tests do not sound like a lot.” It really depends on the type of tests. 300 unit tests or API tests might not be much, but 300 automated End-to-End UI Tests (Level 3 of the AgileWay Continuous Testing Grading) is a big deal. To give you some perspective, I run all 300 tests in a CT process which is triggered whenever a code change is made. I have full confidence in this regression suite; if it passes, a production release will follow.
What does my Desktop Test Automation look like?
Here is a screencast of an Appium/WinAppDriver test that drives TestWise (a testing IDE) to create a new Selenium WebDriver test and run it.
The Appium/WinAppDriver test was written and run in TestWise, hence I call it: “TestWise tests TestWise”.
What role does desktop test automation play for my app?
Once I’ve got a green build (passing all regression tests) in the BuildWise CT server, I make a public release of a new TestWise version. Today I released v6.6.6.
At AgileWay (my company), we have never used (and probably never will use) a defect tracking system. If a defect is reported, we replicate it in a new automated test (or update an existing one), then add it to the regression test suite.
My Desktop Test Automation Journey
I started practising Web Test Automation (Watir, then Selenium WebDriver) in 2005. At conferences, audience members occasionally asked me questions about Desktop Automation. My answer was: “Test Automation for desktop apps is much harder than for web apps. Success in web test automation is rare, let alone for desktop apps.”
As a matter of fact, I have been researching and trying desktop test automation since 2006, with some degree of success.
Record-and-playback testing tools such as HP QTP do not count. Their proprietary test script syntax is poor in quality and inflexible to use. More importantly, it is almost impossible to maintain a large test suite with them. These kinds of testing tools (often quite expensive), in my opinion, should not exist.
The first automation library I used was AutoIt3, which was not easy to use. Therefore, I created a wrapper library, rFormSpec, which I initially used for a day job and later for my own TestWise.
However, I was not totally satisfied with the AutoIt approach in terms of reliability and flexibility. Because of that, the release cycle for TestWise was much longer than for my web apps. I kept searching for a viable solution for desktop test automation.
In 2019, I saw this on MSDN: Microsoft deprecated Coded UI Test and recommended Appium with WinAppDriver for testing desktop and UWP Apps.
The proof of concept (using Appium + WinAppDriver) took me some time; as you would expect, compared to web test automation, testing desktop apps is far more challenging. Also, there were few resources available on WinAppDriver testing, as it was new. In the end, I was convinced that this was the right approach, so I converted my rFormSpec/AutoIt3 tests to Appium/WinAppDriver.
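For readers who have not tried it, driving a Windows app with Appium/WinAppDriver looks much like starting a Selenium session, just with Windows-specific capabilities. Below is a minimal sketch in Ruby; the app path and server URL are placeholders (not TestWise’s actual configuration), and WinAppDriver (or an Appium server managing it) is assumed to be already running.

require "appium_lib"

# A minimal sketch, not TestWise's actual test setup.
# The "app" path is a placeholder for the executable of the app under test.
# WinAppDriver is assumed to be listening at server_url (4723 is its default port).
caps = {
  caps: {
    platformName: "Windows",
    deviceName:   "WindowsPC",
    app:          'C:\Program Files\MyApp\MyApp.exe'
  },
  appium_lib: { server_url: "http://127.0.0.1:4723" }
}

driver = Appium::Driver.new(caps, true).start_driver
sleep 1              # give the main window time to appear
puts driver.title    # window title of the app under test
driver.quit

From there, a test script reads much like a Selenium one: find elements, click, send keys, assert.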
Desktop Test Automation vs Web Test Automation
Compared to test automation for desktop apps, web test automation is easier. The reason is simple: all web apps are built on the same standard, HTML, and run in one desktop app: the browser (e.g. Chrome).
Here is a quick comparison. Compared to Selenium WebDriver, Appium/WinAppDriver is:
Harder to identify an element in UI
For web apps, just right-click on a page element and select Inspect; no extra tool is required. With WinAppDriver, a separate recorder utility such as UI Recorder is mandatory.
Slower Execution
Locating elements and entering text is noticeably slower.
Less Reliable
For example, below is one error I got this morning: “WinAppDriver crashed during startup”.
This is understandable, as WinAppDriver is still relatively new (the first release was in late 2017 and the current version is v1.2.1) while Selenium WebDriver is 10 years old.
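To make the element-identification difference concrete, here is a rough sketch. The element names and locators below are made up for illustration, and browser / driver are assumed to be already-started Selenium and WinAppDriver sessions respectively.

# Web (Selenium WebDriver): the locator comes straight from the browser's Inspect pane
browser.find_element(:id, "new-project-button").click   # hypothetical element id

# Desktop (Appium/WinAppDriver): the control's Name or AutomationId usually has to be
# discovered first with a separate tool (e.g. UI Recorder or inspect.exe)
driver.find_element(:name, "New Project...").click                       # hypothetical control name
driver.find_element(:accessibility_id, "txtProjectName").send_keys("T1") # hypothetical AutomationId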
My ToolSet
The syntax framework and toolset I use for testing desktop and web apps are actually the same: AgileWay Test Automation Formula.
Automation Framework: Selenium WebDriver or Appium (raw)
Scripting Language: Ruby
Test Syntax Framework: RSpec
Source Control: Git
Testing Tool: TestWise
Continuous Testing Server: BuildWise, Parallel Testing with BuildWise Agents
There is no cost for the frameworks and the CT server. The tools (the testing IDE and build agents) come with a free-use mode.
A sample test script
I follow the Maintainable Automated Test Design for both Appium and Selenium tests.
load File.dirname(__FILE__) + "/../test_helper.rb"

describe "Create a new Selenium RSpec project" do
  include TestHelper

  before(:all) do
    close_all_testwise
    @driver = Appium::Driver.new(testwise_caps, true).start_driver
    sleep 0.5 # add a sleep for app window to show up
    @main_window = MainWindow.new(@driver, testwise_version + " - AgileWay")
  end

  before(:each) do
    @main_window.send_keys([:control, :shift, "n"])
    sleep 0.25
  end

  after(:each) do
    sleep 0.1
    @main_window.send_keys([:control, :shift, "x"]) # close project
  end

  after(:all) do
    sleep 0.1
    driver.quit unless debugging?
  end

  it "New Selenium WebDriver RSpec project" do
    new_project_dir = tmp_dir("t1")
    FileUtils.rm_rf(new_project_dir) if Dir.exists?(new_project_dir)
    FileUtils.mkdir_p(new_project_dir)
    puts("create a new project in this dir: #{new_project_dir}")

    # Page Object Model
    new_project_dialog = NewProjectDialog.new(driver)
    new_project_dialog.enter_title("T1")
    new_project_dialog.enter_location(tmp_dir("t1"))
    new_project_dialog.clear_url
    new_project_dialog.enter_url("file:///{{TESTWISE_HOME}}/gui-tests/testdata/site/index.html")
    new_project_dialog.click_ok

    test_helper_rb = File.join(new_project_dir, "test_helper.rb")
    abstract_page_rb = File.join(new_project_dir, "pages", "abstract_page.rb")
    new_spec = File.join(new_project_dir, "spec", "new_spec.rb")
    expect(File.size(test_helper_rb)).to be > 100
    expect(File.size(abstract_page_rb)).to be > 100
    expect(File.size(new_spec)).to be > 100

    # Helper function
    open_file("new_spec")
    main_window.click_tool_run_test_file
    sleep 10

    run_results = ""
    try_for(8, 2) {
      run_results = get_current_run_panel_csv
      puts "Run Results => " + run_results.inspect
      expect(run_results).to include("new_spec.rb")
    }
    expect(run_results).to include("Test Case Name")
    expect(run_results).to include("OK")
  end
end
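The NewProjectDialog and MainWindow objects used above are window classes following the same maintainable page-object style. The real ones live in TestWise’s own test suite; a minimal sketch of what such a class might look like is below (the control names and AutomationIds are assumptions, not TestWise’s actual ones).

# A sketch of a window (page-object) class in the style used by the test above.
# The AutomationIds and control names below are assumptions for illustration only.
class NewProjectDialog
  def initialize(driver)
    @driver = driver
  end

  def enter_title(title)
    @driver.find_element(:accessibility_id, "txtProjectTitle").send_keys(title)
  end

  def enter_location(dir)
    @driver.find_element(:accessibility_id, "txtLocation").send_keys(dir)
  end

  def clear_url
    @driver.find_element(:accessibility_id, "txtUrl").clear
  end

  def enter_url(url)
    @driver.find_element(:accessibility_id, "txtUrl").send_keys(url)
  end

  def click_ok
    @driver.find_element(:name, "OK").click
  end
end

Keeping element locators inside window classes like this is what makes a 300-test suite maintainable: when the UI changes, only the affected window class needs updating, not every test.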