We recently started using Behave (github link) for BDD of a new Python web service.
Is there any way we can get detailed info about the failure cause when a test fails? The tests throw AssertionError, but they never show what exactly went wrong, for example the expected value and the actual value that went into the assert.
We have been trying to find an existing feature like this, but I guess it does not exist. Naturally, a good answer to this question would be hints and tips on how to achieve this behavior by modifying the source code, and whether this feature exists in other, similar BDD frameworks such as jBehave, NBehave or Cucumber.
Today, when a test fails, the output says:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
Traceback (most recent call last):
  File "C:\pro\venv\lib\site-packages\behave\model.py", line 1037, in run
    match.run(runner.context)
  File "C:\pro\venv\lib\site-packages\behave\model.py", line 1430, in run
    self.func(context, *args, **kwargs)
  File "features\steps\login.py", line 20, in step_impl
    assert context.response.status == int(status)
AssertionError
Captured stdout:
api.new_session
api.delete_session
Captured logging:
INFO:urllib3.connectionpool:Starting new HTTP connection (1): localhost
...
I would like something more like:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
ASSERTION ERROR
Expected: 401
But got: 200
As you can see, the assert statement in our generic step is simply
`assert context.response.status == int(status)`
but I would rather have a function like
assert(behave.equals, context.response.status, int(status))
or anything else that makes it possible to generate dynamic messages from the failed assertion.
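For reference, a plain Python assert can already carry a dynamic message, which would produce roughly the output shown above; we just don't want to hand-write that boilerplate in every step (illustrative sketch only, not our actual code):

# attach the expected and actual values to the assert by hand
assert context.response.status == int(status), \
    "Expected: {} But got: {}".format(int(status), context.response.status)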
Behave scripts can be debugged by dry-running the test steps. A dry run goes over all the test steps without actually executing them, which helps to find undefined steps in the step definition files and catches missing import statements, syntax errors, and so on.
You can run a single feature file by using the -i or --include flag followed by the name of the feature file; see the example below. For more information, check the documentation for command-line arguments; there is a lot of useful information hidden in its appendix section. NOTE: at the time I'm writing this, it won't work with Python 3.6 and Behave 1.2.
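For example, typical invocations might look like this (flag spellings as of Behave 1.2; the feature name is only illustrative):

# parse all features and match steps, but do not execute them
behave --dry-run
# only run feature files whose name matches the given pattern
behave -i logout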
Developers describe behave as "A Python library to implement BDD tests": it is behaviour-driven development, Python style, using tests written in a natural-language style, backed up by Python code. Cucumber, on the other hand, is described as "Simple, human collaboration".
Instead of using "raw" assert statements like in your example above, you can use another assertion provider, such as PyHamcrest, which will give you the desired details.
It will show you what went wrong, for example:
# -- file:features/steps/my_steps.py
from hamcrest import assert_that, equal_to
...
assert_that(context.response.status, equal_to(int(status)))
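Put together as a complete step definition (a sketch; the step text and the response object are assumed from the question), this could look like:

# -- file:features/steps/login.py  (sketch; names taken from the question)
from behave import then
from hamcrest import assert_that, equal_to

@then('the response status should be {status}')
def step_impl(context, status):
    # On failure, PyHamcrest raises an AssertionError whose message
    # lists both the expected and the actual value.
    assert_that(context.response.status, equal_to(int(status)))

The failure message then contains both the expected and the actual status, which is exactly the kind of output asked for above.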