When we find a bug in a system, we have to make sure we can isolate it, and also reduce the risk that it was our approach that caused the bug, rather than the system itself.
When I was creating a demo for my CounterString Chrome Extension I used GitHub, because the Chrome extension source is on GitHub.
I noticed that a CounterString of 100 characters, when used as a search term, was reported as being longer than 128 characters.
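A counterstring is a self-describing test string: each number records the position of the separator character that follows it, so even if the string is truncated you can read from its tail how many characters survived. A minimal JavaScript sketch of a generator (an illustration of the idea, not the extension’s actual code):

    function counterString(length, sep = "*") {
        // build from the end backwards so that each number states
        // the position of the separator that follows it
        let out = "";
        let pos = length;
        while (pos > 0) {
            const token = String(pos) + sep;
            if (token.length > pos) {
                // not enough room left for a full token, pad with separators
                out = sep.repeat(pos) + out;
                break;
            }
            out = token + out;
            pos -= token.length;
        }
        return out;
    }

    counterString(100);      // "*3*5*7*9*12*...*96*100*" - 100 characters
    counterString(100, "-"); // "-3-5-7-9-12-...-96-100-" - same, with "-" separators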
Risk
What if the tool I was using was the source of the problem?
Do you know what risks the tools that you use add to your process?
The Chrome Extension doesn’t actually type the text into the field; it changes the value attribute. This is equivalent to a user amending the DOM to add a value to an input field.
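In the developer console, that amendment looks something like this, using the generator sketch above (the selector is a guess at the search input, purely for illustration):

    // hypothetical selector - set the value directly rather than typing
    document.querySelector('input[name="q"]').value = counterString(100);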
But this isn’t something that a normal user will do.
“No User would ever do that!”
Actually, I frequently do that. To bypass validation, to bypass front end bugs, to copy and paste into fields that block copy and paste.
A user might copy and paste into a field, but this extension does not copy and paste into the field. It writes the CounterString to the developer console to support copying and pasting, but the extension is simple, with minimal permissions, so it doesn’t copy and paste.
I do have a CounterString tool that does that, and PerlClip does that too.
Also, this tool does not press the keys that a user would press to enter data, so it bypasses some JavaScript events that a normal user would trigger. (This CounterString tool can do that.)
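A tool that wanted to behave more like a real user might dispatch those events itself. A sketch of what that could look like (assumed selector and event set, not the extension’s code):

    const field = document.querySelector('input[name="q"]'); // hypothetical selector
    field.value = counterString(100);
    // fire the events that a real key press would trigger, so listeners still run
    field.dispatchEvent(new KeyboardEvent("keydown", { bubbles: true }));
    field.dispatchEvent(new Event("input", { bubbles: true }));
    field.dispatchEvent(new Event("change", { bubbles: true }));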
So this tool introduces some risk into my testing. Has that risk caused this problem?
Investigation
To investigate, I took the generated counterstring and checked that its length was correct using the console:
"*3*5*7*9*12*15*".length
Then I copied and pasted it into the field.
The search still failed.
Having recreated the issue through a copy and paste, I am more comfortable that my tool is not the cause of the problem.
Is it the “*”?
I wanted to see if perhaps the “*” in the CounterString was part of the problem.
So I took the generated counterstring and tried it with “-”:
-3-5-7-9-12-15-18-21-24-27-30-33-36-39-42-45-48-51-54-57-60-63-66-69-72-75-78-81-84-87-90-93-96-100-
“We could not perform this search. The search is longer than 128 characters.”
I tried with spaces:
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 "
This time the search was accepted.
Is it because the start and end spaces are trimmed?
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 "
When I try with an additional two characters to compensate for the leading and trailing spaces, the search is performed:
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 –"
Trying with spaces up to the max performs the search:
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128 "
The next search with a single leading space is reported as too long (it is 129 characters):
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128 -"
The next search with the trailing space and ‘-’ reversed
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128- "
is accepted as being within the 128 character limit, even though it is 129 characters long. After some experimentation I can see that no matter how many spaces I put on the trailing end of the string, it is accepted. I suspect that trailing spaces are trimmed off.
I added a lot of leading and trailing spaces to the above and it was accepted, so I suspect leading and trailing spaces are trimmed.
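If the server trims the way JavaScript’s trim() does, which is an assumption on my part, the console can predict which strings should pass. Reusing the generator sketch from above:

    const s = counterString(128, " ");  // space-separated counterstring, 128 characters
    console.log(s.length);              // 128
    console.log(s.trim().length);       // 126 - leading and trailing spaces removed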
To further test for leading spaces I used:
" 3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128 -"
The above is 129 characters, including the leading space.
When I remove the leading space to get 128 characters:
“3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128 -”
It is rejected as being longer than 128 characters.
I admit to being confused at this point.
I could collate a range of test data which triggers the issue:
Actual size 128, reported as > 128
“3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128–”
“3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128-.”
“3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128*.”
Actual size 128, reported as <= 128
“3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128..”
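The character counts are easy to confirm in the console: both of the strings below are genuinely 128 characters, yet one is rejected and one is accepted:

    const rejected = "3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128-.";
    const accepted = "3 5 7 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87 90 93 96 100 104 108 112 116 120 124 128..";
    console.log(rejected.length, accepted.length); // 128 128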
At this point I believe there is an issue, but from a testing perspective I don’t see much point in looking deeper, because the cause will probably be obvious from the code. During testing we have to make decisions about how far to go. This seemed far enough from the GUI: I was heading into rabbit holes and special cases that I think the code would clarify.
Is it the GUI?
I then moved on to checking this from an HTTP perspective through a proxy.
This would isolate the problem to the backend rather than the GUI.
When I submitted the string values URL encoded, the issue could be replicated purely through HTTP messages.
I believe this is a backend processing issue.
How does this help?
Pushing the testing down to the HTTP level makes it much easier to replicate in an automated fashion, e.g. I could create a cURL command that replicates the defect, which would make it easier for anyone to reproduce.
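For example, the same replication from the console with fetch, reusing the rejected string from the snippet above (the URL shape is my assumption of what the search request looks like; the real URL and parameters would come from the proxy):

    // hypothetical request shape - take the real URL and parameters from the proxy
    fetch("https://github.com/search?q=" + encodeURIComponent(rejected))
        .then(response => response.text())
        .then(body => console.log(body.includes("longer than 128 characters")));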
It also means that when we come to write code to automatically replicate this and assert on the fixed behaviour, we know that we do not need to do it at the GUI level; we can do it at the HTTP level.
NOTE: not at an API level, at an HTTP level.
Too many people get hung up on:
- we need an API to avoid automating the GUI
- we have to automate the GUI to assert on the fix because we don’t have an API
Automate at the injection and manipulation points in the application. If you have a web GUI and a server, you can automate via HTTP if you want to test the backend. There is a risk that your HTTP requests may diverge over time from the actual requests made by the GUI. An API can reduce that risk, but so can having code that automates the GUI to trigger some requests, which you then compare with the requests you have modelled at the HTTP level, to assert that the assumptions in your HTTP model still match the GUI.
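As a sketch of that comparison, with an illustrative request shape rather than a real framework:

    // compare a request captured from the GUI (via the proxy) with the modelled one
    function matchesModel(captured, modelled) {
        return captured.method === modelled.method &&
               captured.path === modelled.path &&
               JSON.stringify(captured.params) === JSON.stringify(modelled.params);
    }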
Summary
I think:
- this identifies a defect
- I demonstrated that the tool I was using was not the cause of the problem
- I isolated the defect to the backend server
- I made it easier to automate asserting on the fix
Difference from the video
The video of the live session where I investigated this is slightly different.
When I record videos I tend not to write as many notes or keep track of the data I’m using, and on some applications I really need to do that.
This application in particular has such a nuanced interpretation of the data that the differences are subtle, and when I don’t write them down I lose track.
If I was pairing rather than recording then my pair might be able to help remember, but I prefer to rely on my notes during exploration, rather than my memory.
When I test for real, I make a lot of notes.
When I record the videos, to make them smoother, I don’t do that, which impacts my testing.
It might be worth watching the video so that you can see where I lose track because I haven’t kept notes; it’s a good example of how important note taking can be.
The Video
In this video I investigate a defect on GitHub that I found when creating a demo video for my CounterString Chrome Extension.
You will see:
- use of CounterStrings
- analysing test data
- using Chrome dev tools to support testing
- using proxies to support testing
If you found this useful then you might be interested in my Online Technical Web Testing 101 course.