

Because sometimes it's hard... some tips for "Working with WebDriver"

Jun 30, 2020

I can see why people find it hard to work with WebDriver, particularly people new to the tool. Over the years I have learned to expect almost constant change from the ecosystem within which I automate with WebDriver (Browsers, Drivers, WebDriver, Java, JUnit). After all, the ongoing betterment of web automating worldwide must continue.

Stuff Changes, Get Used to It

  • The API changes.
  • Sometimes it works in this browser, sometimes it doesn’t.
  • Sometimes a browser upgrade breaks Selenium.
  • Sometimes the version of Selenium has bugs.
  • Sometimes WebDriver changes expose bugs in our use of WebDriver, or in our assumptions about how we use it.
  • etc. etc.

And throughout all of that, I have used it for automating production applications, without pulling my hair out too much.

  • Make sure to run your test code regularly.
  • Make your tests reliable so that if they fail, it is because something changed.
  • Fix failing tests quickly to keep the build passing.

Check you are using the most recent version of the drivers

Browsers self-update very regularly.

Sometimes this means we need to update the drivers.

Check that your drivers are up to date.
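Mismatched major versions are the usual symptom here: roughly speaking, Chrome 115 wants ChromeDriver 115. Below is a minimal sketch of that check with hypothetical helper names - the version strings are illustrative, and recent versions of Selenium can also resolve a matching driver for you automatically (via Selenium Manager):

```java
// Hypothetical helper: compare the major version reported by the browser
// and by the driver binary, since mismatched majors are a common cause of
// sudden failures after a browser self-update.
public class VersionCheck {

    // Extract the major version from output such as
    // "ChromeDriver 114.0.5735.90" or "Google Chrome 115.0.5790.110".
    public static int majorVersion(String versionOutput) {
        java.util.regex.Matcher m =
                java.util.regex.Pattern.compile("(\\d+)\\.").matcher(versionOutput);
        if (!m.find()) {
            throw new IllegalArgumentException("no version number found in: " + versionOutput);
        }
        return Integer.parseInt(m.group(1));
    }

    public static boolean majorsMatch(String browserVersion, String driverVersion) {
        return majorVersion(browserVersion) == majorVersion(driverVersion);
    }
}
```

You could run this against the output of `chromedriver --version` and the browser's reported version at the start of a test run, and fail fast with a clear message rather than a cryptic session-creation error.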

Assume published web documentation is out of date

Learn to read the source code.

Click through to the actual source to see the comments in the code for the methods.

Assume all documentation, and blog posts, are out of date to some extent. Trust the code.

Get your tests working cross browser, but don’t mandate everything

  • Get your tests working in multiple browsers
  • Create the ability to switch off some tests for certain browsers

I don’t expect all tests to run on all browsers.

  • Sometimes the synchronisation is different for some browsers and is hard to put in place.
  • Sometimes the execution approach we use doesn’t work on all browsers.

Sometimes browsers, or drivers, have bugs, so when we update to the new version something that was working, stops working.

People spend far too much time trying to make everything work on all browsers all the time.

It can be useful to run on different browsers, but don’t expect everything to run on all browsers all the time.
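One lightweight way to "switch off some tests for certain browsers" is a small filter object. The class below is a hypothetical sketch; the browser name would come from wherever your suite configures it (a system property, an environment variable, etc.):

```java
// Hypothetical helper for switching off tests on specific browsers,
// rather than mandating that everything runs everywhere.
public class BrowserFilter {

    private final java.util.Set<String> excludedBrowsers;

    public BrowserFilter(String... excluded) {
        this.excludedBrowsers = new java.util.HashSet<>();
        for (String browser : excluded) {
            excludedBrowsers.add(browser.toLowerCase());
        }
    }

    // true if the test should run on this browser, false if it is excluded
    public boolean shouldRunOn(String browser) {
        return !excludedBrowsers.contains(browser.toLowerCase());
    }
}
```

In a JUnit 5 test this pairs naturally with `Assumptions.assumeTrue(filter.shouldRunOn(currentBrowser))`, which reports the test as skipped rather than failed.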

I primarily want to test functionality. If I have specific functionality that I can’t check cross browser with Selenium then I’ll check it manually. After all, I automate a subset of functionality.

Learn JavaScript

Learn JavaScript and then you can use the JavascriptExecutor to provide a rich source of workarounds and augmentations.

A vital part of working with Selenium involves workarounds. Never assume that one perfect way of doing anything exists. Do it quick and dirty if you need to. Selenium or Driver updates, or browser updates, have a habit of making my workarounds redundant on the next update.
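As a sketch of the kind of workarounds I mean, here is a hypothetical catalogue of scripts - each one would be passed to Selenium’s `JavascriptExecutor` via `((JavascriptExecutor) driver).executeScript(script, element)`:

```java
// Hypothetical catalogue of JavaScript snippets used as workarounds.
// Each script is executed in the browser by JavascriptExecutor;
// arguments[0] is the WebElement passed alongside the script.
public class JsWorkarounds {

    // Click via JavaScript when a native click is intercepted by an overlay.
    public static final String CLICK = "arguments[0].click();";

    // Scroll an element into view before interacting with it.
    public static final String SCROLL_INTO_VIEW = "arguments[0].scrollIntoView(true);";

    // Read a value the page has but the WebDriver API does not expose directly.
    public static final String READY_STATE = "return document.readyState;";
}
```

Treat these as disposable: a later Selenium or browser update may make the native call work again, at which point the workaround should go.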

For tips on how to learn JavaScript, read this blog post.

Don’t try and automate everything

“But I really want to check that the div is displayed when Ctrl+Z are pressed down, and that it goes when I key up”.

Great, then find another way to do it, or add a new library to your toolset.

We do not have to automate every single condition.

Automate to cover the items that are:

  • at most risk due to changes
  • on the technical ’edge’ and might be impacted by future browser changes
  • easy to pass in ‘data’ to increase data scope across a smaller functional area
  • etc.

We make decisions about what to automate; we do not have to automate everything. Weigh the risk/reward in terms of Information, Misinformation and Maintenance:

  • Information - a passing test gives us information
  • Misinformation - an intermittent test gives us misinformation
  • Maintenance - if we have to update the test so often that it takes as much time as running it, then… don’t automate it.

If you find a problem, check the forums and bug reports

It may not be happening just to you. Other people find problems and report them. Sometimes you will find workarounds in the forum thread. You don’t want to spend hours chasing through your own code when you might have stumbled over a known problem, one that only impacts a small set of fortunate people like yourself.

If there is an error message on screen then make sure you search for the ‘exact’ error message shown.

Don’t settle for the first ‘answer’.

If nothing useful appears in your search results, remember to search stackoverflow directly.

Create an experiment to isolate the problem

If it seems like some people have the problem but others don’t, then create the simplest experiment that will demonstrate the problem.

e.g. if a test is failing, can you write a new @Test which contains the minimal setup and action steps that trigger the issue?

This can help you find a workaround. The fastest short-term workaround I know of involves excluding the test from one browser, and running it on another.

Try an alternate approach. Sometimes the first time we write an @Test implementation we pick something that looks sensible, but later, with more experience and understanding of the application under test, we see that our approach was unnecessarily complex.

Amend your experiment to:

  • Try an alternative
  • Simplify the approach e.g. do you actually need all those navigation steps, or could you just ‘get’ the page?

If it is very hard to automate through the GUI, then consider if you can isolate the functionality and results and test a different way.

e.g.

  • automate using an HTTP call
    • possibly ‘stealing’ an authentication cookie from a WebDriver Test
  • add to the Unit Test Automated Execution
  • can you mitigate most of the risk through an API call?
  • perhaps you should periodically check it via human power?
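The cookie ‘stealing’ idea above can be sketched as follows - the cookie names and URL are illustrative, and in a real test the name/value pairs would come from `driver.manage().getCookies()`:

```java
// Hypothetical sketch: reuse a session cookie obtained from a WebDriver
// session in a plain HTTP request, so functionality can be checked over
// HTTP instead of driving the whole GUI journey.
public class CookieReuse {

    // Build a Cookie header value from name/value pairs,
    // e.g. taken from driver.manage().getCookies() in a real test.
    public static String cookieHeader(java.util.Map<String, String> cookies) {
        StringBuilder header = new StringBuilder();
        for (java.util.Map.Entry<String, String> cookie : cookies.entrySet()) {
            if (header.length() > 0) {
                header.append("; ");
            }
            header.append(cookie.getKey()).append("=").append(cookie.getValue());
        }
        return header.toString();
    }

    // Attach the header to a GET request; sending it is left to the caller.
    public static java.net.http.HttpRequest authenticatedGet(
            String url, java.util.Map<String, String> cookies) {
        return java.net.http.HttpRequest.newBuilder(java.net.URI.create(url))
                .header("Cookie", cookieHeader(cookies))
                .GET()
                .build();
    }
}
```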

Check your Synchronisation

Many (possibly most) of the problems I encounter in my and other people’s test code relate to synchronisation. Learn to use the WebDriverWait rather than rely on implicit waits.
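An explicit wait is, at heart, a polling loop with a timeout; understanding that makes the WebDriverWait API much less mysterious. A stripped-down, hypothetical version of the idea:

```java
// Hypothetical stripped-down version of what an explicit wait does:
// poll a condition until it returns true or the timeout expires.
// WebDriverWait adds ignored-exception handling and richer conditions.
public class SimpleWait {

    public static boolean until(java.util.function.BooleanSupplier condition,
                                long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }
}
```

With Selenium 4 itself you would write something like `new WebDriverWait(driver, Duration.ofSeconds(10)).until(ExpectedConditions.visibilityOfElementLocated(locator))`, which layers exception handling and element conditions on top of this basic loop.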

Try your test in debug mode; if it works there but fails in the build, then investigate it as some sort of synchronisation problem.

Try running the test in a loop and see if you can get it to fail; this can help diagnosis.
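A sketch of that loop idea - the test body here is a stand-in `Runnable`, where a real suite would drive the browser through the failing steps:

```java
// Sketch: run a flaky check many times to estimate how often it fails.
// Catching AssertionError covers failed assertions; RuntimeException
// covers the unchecked exceptions WebDriver code typically throws.
public class FlakinessProbe {

    public static int countFailures(Runnable testBody, int runs) {
        int failures = 0;
        for (int i = 0; i < runs; i++) {
            try {
                testBody.run();
            } catch (AssertionError | RuntimeException e) {
                failures++;
            }
        }
        return failures;
    }
}
```

A failure count that is low but non-zero points at synchronisation; a count of 100% points at a genuine functional or environmental change.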

JavascriptExecutor can help with synchronisation, if you learn to use it, then you’ll see the possibilities when you need it.

Many different types of synchronisation problems exist - I include parallel running of tests which interfere with each other, as a synchronisation problem.

We have an online course dedicated to Synchronisation

Look for a bug in your test, or the app under test

It is tempting to immediately blame our tools.

Don’t stop investigating and simply pin the blame on WebDriver.

People find it easy to do that, and then miss problems of their own making.

If your organisation seems to have a unique problem then assume the fault rests with you and investigate fully.

Consider:

  • What has changed recently?
  • Look through the logs of previous builds - were there any hints that a problem was present?
  • Double check the test code against a manual interaction with the application.

Warnings are not Errors

Often we see what looks like error messages in the Driver output logs, but if they have WARNING on the front, they are not error messages we should worry about.

Otherwise known as: FAQ - Why is my test passing, but the driver throwing errors or exceptions in the logs?

The drivers are often noisy, i.e. they write a lot to the logs. At first glance it might look like an error is being reported; it isn’t really.

Although the logging says “pipe error” it is reported as a Warning:

WARNING: pipe error:

It is the “WARNING:” that you want to pay attention to; if that said “ERROR:” then you would be seeing errors.

Generally, if your code executes without an exception being thrown you are fine. If you experience an exception then have a look at the logs to see what is reported.

Example Case Study - Tests not working on Mac

I had code written on Windows. I upgrade the version of WebDriver on my Mac and run the tests. And some of my tests fail.

I assume due to Mac (cross-platform) incompatibilities.

But I’ve made a cardinal sin.

I haven’t followed my own advice for upgrade processes.

Namely:

  • Make sure the tests are working and stable before upgrading the environmental elements, i.e. browsers and driver versions

Fortunately it doesn’t take me too long to figure out that I should really update the version of WebDriver on my Windows box (where all the tests are running fine) first.

I do that, and I get the same issue.

It wasn’t a mac compatibility issue, it was my use of the new version of WebDriver.

Example Case Study - Browser Differences

I was using Keys.ENTER to complete input fields.

But my tests started failing after browser updates.

I isolate the issue to a tiny test to experiment with.

I start to look for alternative keys and approaches.

I discover I now need to use Keys.RETURN for this site.

Before, which one I used was an arbitrary choice; now only Keys.RETURN does what I want, for some reason that I didn’t investigate.

To ‘fix it’ I put this in an abstraction layer, i.e. a submitForm method, to isolate it.

Sometimes with WebDriver, I’m more focused on what I try to do with it, and I pick the ‘first thing that works’ but that might not be the ‘best’ long term approach.

If I find isolated issues like this, I abstract the fix to make it easier to amend in the future e.g. it didn’t really matter how I submitted the form, I just wanted the form submitted so I abstract it away. Then if the form stops submitting later, I change the submission approach.
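That abstraction idea can be sketched as below. The names are hypothetical, and in a real suite the implementations would send Keys.RETURN, Keys.ENTER, or click a submit button through WebDriver:

```java
// Hypothetical abstraction for "submit the form", so the mechanism can
// change in one place when a browser update breaks it.
public class FormSubmission {

    // The swappable mechanism: in a real suite, implementations would
    // send Keys.RETURN, Keys.ENTER, or click the submit button.
    public interface FormSubmitter {
        void submit();
    }

    private FormSubmitter submitter;

    public FormSubmission(FormSubmitter submitter) {
        this.submitter = submitter;
    }

    // Tests call this; they never care how submission happens.
    public void submitForm() {
        submitter.submit();
    }

    // Swap the mechanism without touching any test code.
    public void useSubmitter(FormSubmitter replacement) {
        this.submitter = replacement;
    }
}
```

When a browser update breaks form submission again, only the FormSubmitter implementation changes; none of the tests calling submitForm() need to be touched.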

Example Case Study - Cross Platform Locators

All my tests were working fine on Windows. But a few were failing on Mac. A click on an element that worked fine on Windows, failed on the Mac. How odd.

After a little investigation I discovered that Windows was allowing me to be lazier than the Mac.

I had been finding and clicking on “#filter li”

But I really should have been finding and clicking on “#filter li a”

i.e. the child Anchor, not the enclosing ListItem.

So even though I thought “a click on the element worked fine on Windows, but failed on the Mac”, the truth was that I hadn’t been clicking on the correct element in the first place.

It seems as though the click event was propagating down the DOM to the Anchor tag on Windows, but not the Mac.

Lessons Learned:

  • Be as specific as possible in your locators so that you are selecting the actual element you want the event to reach.
  • Cross-platform execution often exposes errors in our automation code: particularly our location strategies and synchronisation strategies

Example Case Study - Actions Not Working … browser and driver out of sync

For the life of me, I could not get User Interactions to work properly. They just wouldn’t work. They used to work, last time I tried them. But now…

new Actions(driver).keyDown(Keys.CONTROL)
                   .sendKeys("b")
                   .keyUp(Keys.CONTROL)
                   .perform();

Nope. Not Working.

I closed the automation down and performed some web searches. I found I wasn’t alone on the forums.

Then I tried again and… on my machine, with the way I was starting the browsers, with my version of Selenium and the versions of the browsers I installed…

I got results I didn’t expect…

It worked.

Wait a minute. Did you see that? The logging messages in WebDriver were slightly different.

I had the most recent version of WebDriver, but an out of date version of the Browser on my machine. The browser and driver were out of sync.

Lessons Learned:

  • When moving between environments, check that automation works before amending anything
  • This helps isolate environment issues.
  • Don’t start updating and adding new tests if they are failing on one environment, but not on another. Get the environments working first.