Another Milestone in Web Test Automation: Over 600 End-to-End Selenium Tests for my WhenWise App
Another solid example proving that real E2E (UI) test automation is achievable.
On 2025-07-11, WhenWise’s End-to-End (UI) test suite surpassed 600 user-story-level test cases. These tests serve as a regression suite, and of course, they run and pass on a daily basis (not strictly every working day, but on any day when changes are made).
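To make “user-story level” concrete, here is a minimal sketch of what such a test might look like. This is not WhenWise’s actual code; the URL, element IDs, credentials, and the Ruby + RSpec + selenium-webdriver stack are all illustrative assumptions.

```ruby
# A hypothetical user-story-level E2E test (Ruby + RSpec + selenium-webdriver).
# All element IDs, the URL, and the login credentials below are assumptions
# for illustration only, not WhenWise's real test code.
require "selenium-webdriver"
require "rspec"

describe "Client books an appointment" do
  before(:all) do
    @driver = Selenium::WebDriver.for(:chrome)
    @driver.navigate.to "http://localhost:3000" # assumed local test server
  end

  after(:all) { @driver.quit }

  it "lets a signed-in client book a service" do
    # Sign in through the real UI, as an end user would.
    @driver.find_element(id: "username").send_keys("client@example.com")
    @driver.find_element(id: "password").send_keys("test01")
    @driver.find_element(id: "sign_in_button").click

    # Walk through the booking flow step by step.
    @driver.find_element(link_text: "Book Appointment").click
    Selenium::WebDriver::Support::Select
      .new(@driver.find_element(id: "service_id"))
      .select_by(:text, "Haircut")
    @driver.find_element(id: "submit_booking").click

    # Assert on what the user actually sees.
    expect(@driver.page_source).to include("Appointment confirmed")
  end
end
```

Each `it` block reads as one user story, which is what makes a suite like this useful as living regression documentation.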
Some readers might remember my other article, “Showcase a 500+ End-to-End (via UI) Test Suite: E2E Test Automation is Surely Feasible for Large/Complex Apps”. Well, the number has now reached 600.
This is yet another piece of evidence against the common cowardly excuse in E2E test automation: 'End-to-end tests are flaky, so just write a few.'

Managing a large number of E2E (UI) automated tests is purely a matter of capability. Don’t hide behind weak excuses. Yes, E2E tests are more prone to changes and flakiness compared to white-box unit tests. That’s been a well-known fact for decades. Repeating it as if it’s a revelation—especially while claiming to be an ‘expert’ in test automation or Agile—is misleading.
Some readers might doubt the number and wonder, “Are those real E2E (UI) tests, not API ones?” Yes, check out the video of a test execution on a BuildWise Agent.
Yes, real E2E testing through the UI becomes challenging at scale—but isn’t that exactly the kind of problem software engineers in testing are meant to solve? And if they can’t, shouldn’t they be seeking help and learning how to?
This 605-test count is not even my personal record. Back in 2021, another one of my side apps, ClinicWise, reached 611 automated tests.
Some readers may wonder, “Were these automated E2E (via UI) tests useful?”
Definitely and absolutely! The suite catches regression errors from almost every non-trivial change I make to the code (new features, change requests, bug fixes, refactorings). Without these automated E2E UI tests, all my side apps would have been doomed!
Long-time readers know that I support my statements with facts. Here is one.
Before the green run on 07-11 (passing all tests on the BuildWise CT Server; for the WhenWise suite, that means every one of its 30,000+ Selenium test steps passes), I had a series of failed runs. Yes, defects were detected. That is normal, even for me, an international-award-winning software developer.
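For a sense of scale, a quick back-of-the-envelope check using the two figures quoted in this article (605 tests, 30,000+ test steps) shows each test averages roughly 50 UI steps:

```python
# Back-of-the-envelope scale check using the figures quoted in the article.
total_tests = 605        # E2E (UI) test cases in the WhenWise suite
total_steps = 30_000     # stated lower bound on Selenium test steps per full run

avg_steps_per_test = total_steps / total_tests
print(round(avg_steps_per_test, 1))  # ≈ 49.6 steps per test, on average
```

In other words, a single green run exercises tens of thousands of real browser interactions end to end.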
I then analyze and fix each issue, trigger another run on the BuildWise CT Server, and repeat this cycle a few times until the build goes green. Only then do I push the build to production.
Some might say, 'That sounds like a lot of work.' Yes and no. It does require effort and commitment, but it ultimately saves a great deal of time and helps prevent the disaster of major defects being discovered by customers.
Yes, Robert C. Martin is right on this. Those E2E (Selenium and Appium) tests are massive time and money savers for all my side projects (TestWise, SiteWise, ClinicWise, BuildWise, and WhenWise).
Related reading: