 

Recording mouse click events for GUI testing. What is more reliable than pixel coordinates?

I have been writing a GUI test framework that can record and replay GUI user scenarios by capturing mouse and keyboard events and replaying them.

The mouse events are currently recorded as (press or release, (x, y)). However, this is very fragile: if the destination widget moves by even a few pixels while the structure and everything else stays the same, the test case ceases to work.
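For context, the recorder is essentially a global event filter; roughly like this minimal sketch (the RecordedEvent struct and names are only illustrative, and it assumes the Qt 5 style globalPos(); Qt 6 uses globalPosition()):

```cpp
#include <QApplication>
#include <QMouseEvent>
#include <QObject>
#include <QVector>

struct RecordedEvent {
    QEvent::Type type;   // MouseButtonPress or MouseButtonRelease
    QPoint globalPos;    // raw pixel coordinates -- the fragile part
};

class CoordinateRecorder : public QObject {
public:
    QVector<RecordedEvent> log;
protected:
    bool eventFilter(QObject *watched, QEvent *event) override {
        if (event->type() == QEvent::MouseButtonPress ||
            event->type() == QEvent::MouseButtonRelease) {
            auto *me = static_cast<QMouseEvent *>(event);
            log.append({event->type(), me->globalPos()});
        }
        return QObject::eventFilter(watched, event); // don't consume the event
    }
};

// Usage: qApp->installEventFilter(new CoordinateRecorder);
```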

What is a better approach? Some approaches I can think of:

  • Record the "tree path" of the destination widget in the tree of widgets and their parent widgets, i.e. (press or release, (top level, first child, second child, destination)), where the "child list" is what Qt's QObject::children() returns. I think this has the drawback that the test becomes dependent on the internal code structure.

  • Give every testable widget a unique name and, when replaying, search for the widget with that name. This seems to be a non-negligible overhead. (Both approaches are roughly sketched after this list.)
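
A rough sketch of both approaches (names are illustrative; the lookup by name assumes every testable widget has been given a unique objectName):

```cpp
#include <QApplication>
#include <QVector>
#include <QWidget>

// Approach 1: "tree path" of child indices from the top-level object down.
QVector<int> widgetPath(QObject *target) {
    QVector<int> path;
    for (QObject *obj = target; obj->parent(); obj = obj->parent())
        path.prepend(obj->parent()->children().indexOf(obj));
    return path; // fragile against reordering of children in code
}

QObject *resolvePath(QObject *root, const QVector<int> &path) {
    QObject *obj = root;
    for (int index : path) {
        if (!obj || index < 0 || index >= obj->children().size())
            return nullptr;
        obj = obj->children().at(index);
    }
    return obj;
}

// Approach 2: look the widget up by its unique objectName at replay time.
QWidget *findByName(const QString &name) {
    for (QWidget *top : QApplication::topLevelWidgets())
        if (QWidget *w = top->findChild<QWidget *>(name))
            return w;
    return nullptr;
}
```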

Any other ideas, and what is the commonly accepted "best" approach to this?

asked Feb 10 '14 by Johannes Schaub - litb

1 Answer

The Specifications

It is up to you to decide to what extent the test cases are bound to the particular setup of a test and to changes in the code. It's essentially a question of how tight your specifications are.

On one hand, there are "mechanical", or "dumb" tests. It may be that you're testing with tightly controlled initial conditions: the same settings for initial window positions are pre-set before the test, the same platform style is enforced, the same fonts are available, etc. It is then a reasonable expectation that if you switch two buttons around in a widget, or change the initial window/dialog size, the tests are supposed to fail.

On the other hand, there are "human" tests. You may wish for the tests to succeed if a human reading a script from paper would succeed at the test. In such case, the minor changes such as fonts, location of visual elements, etc., are not important: human testers readily adapt to such changes.

Those two extremes may even apply all at once, but to different parts of the application, or to different stages of the product lifecycle.

If you're designing a UI for an aerospace flight management system, there are aspects that may require fully "mechanical", non-adaptable tests, since any change to the UI that is not covered by a specification change would in fact be a bug.

If you're designing a consumer application, you may wish to keep the specification tight across bug fix or minor releases, but could relax this for major releases, for example.

Cooperation from the Test Cases and Tested Code

In implementing more flexible, more human-like tests, one needs some cooperation from the tested code, from the test case generation process, or from both.

The test case generation process (a test script, a human, etc.) has important knowledge about the meaning of a particular event. For example, a click on a generic button is really aimed at the middle of the button, so it doesn't matter if the button's active area has rounded corners that don't react to clicks. A click may also be destined for a button labeled "OK", no matter whether that button is on the right or left edge of a button bar on a particular platform.
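As a hedged sketch of replaying such a "semantic" click with QtTest: the target widget is resolved at replay time and clicked at its current center, instead of at the pixel position captured during recording (the "okButton" objectName in the usage comment is an assumption about the tested UI):

```cpp
#include <QtTest/QtTest>
#include <QWidget>

void replayClickOnCenter(QWidget *target) {
    // QTest::mouseClick defaults to the widget's center when no position is
    // given, so the step survives moves and resizes of the button.
    QTest::mouseClick(target, Qt::LeftButton);
}

// Usage (assuming the tested code named the button "okButton"):
//   QWidget *ok = window->findChild<QWidget *>("okButton");
//   replayClickOnCenter(ok);
```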

The tested code also has important knowledge about classification of a particular event. For example, a click's coordinates may be important by themselves if it's a click on the canvas of a painting program. Otherwise, it may be a particular visual element within a widget that is important, not its exact coordinates. In this latter case, changes to the appearance of the widget due to platform styling or code updates may make the relative coordinates obsolete.
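One possible, purely illustrative convention for that cooperation: the tested code marks coordinate-sensitive widgets (such as a painting canvas) with a dynamic property, and the recorder checks it before deciding whether to store absolute pixels or just the target widget:

```cpp
#include <QVariant>
#include <QWidget>

// In the tested code: mark the canvas as coordinate-sensitive, e.g.
//   canvas->setProperty("recordExactCoordinates", true);

bool coordinatesAreSemantic(const QWidget *w) {
    // Default is false: the target widget alone (or a position relative to it)
    // is enough, so exact pixels need not be recorded.
    return w->property("recordExactCoordinates").toBool();
}
```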

answered Oct 06 '22 by Kuba hasn't forgotten Monica