I am running into a bit of a conundrum. We have an antiquated system that I am writing Behat tests for, and it works great for the most part. But I have noticed that the Behat tests will fail if the data I am testing against in the current environment was meant for (or pulled from) a different environment.
For example, if I test a search-by-phone function in QA and expect it to return a specific entity id, I cannot use that same phone number and entity id to test in RC or Live. So I would like a manageable way to maintain the testing data for each environment in Behat.
A couple of thoughts have been thrown around here, such as putting the data into the profile (highly undesirable) or creating CSV files for each feature. I am also thinking about building all the data-specific scenarios with tables or scenario outlines and adding an environment column that is checked against the current environment, skipping any row that is not meant for it, perhaps with a Background or some other hook to help out.
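For example, a tag-based variant of that skipping idea, just as a sketch (assuming Behat 3; the profile and tag names here are made up), could be per-environment profiles in behat.yml that filter on tags:

```yaml
# behat.yml sketch (profile and tag names are illustrative)
default:
    suites:
        default:
            filters:
                tags: "@qa"      # default profile only runs scenarios tagged @qa

live:
    suites:
        default:
            filters:
                tags: "@live"    # this profile only runs scenarios tagged @live
```

Each data-specific scenario or outline would be tagged @qa, @rc or @live, and running bin/behat --profile=live would skip everything not meant for that environment.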
Does anyone know of a good way or best practice for dealing with multiple environments in Behat, each with its own data set?
According to the folks at KNP Labs during one of their trainings, best practice is to create the data a scenario needs as part of a Given or Background step. You end up with a step that reads "Given I have 7 phone numbers", and the step definition inserts seven phone numbers that can be relied on for the rest of that scenario.
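A minimal sketch of that kind of step definition, assuming Behat 3 and a hypothetical createTestPhoneNumber() fixture helper, might look like:

```php
<?php

use Behat\Behat\Context\Context;

class FeatureContext implements Context
{
    /**
     * Matches "Given I have 7 phone numbers".
     *
     * @Given I have :count phone numbers
     */
    public function iHavePhoneNumbers($count)
    {
        for ($i = 0; $i < $count; $i++) {
            // createTestPhoneNumber() is a hypothetical helper that inserts a
            // record through your application or fixture layer.
            $this->createTestPhoneNumber(sprintf('555-01%02d', $i));
        }
    }
}
```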
Of course, that's often not feasible if you wish to run tests against a production site, and the strategies I've seen really vary depending on the amount of specific data involved and how volatile the data is on production.
Since best practice also dictates that the feature files should describe the application behavior in terms the feature beneficiary can reasonably be expected to understand, it's unlikely that anything exposing environment-conditional data in the feature file would be an optimal approach. The target feature user probably isn't aware of the varying environments.
If the data on production is stable enough to write tests against, I'd consider setting a parameter or profile in the behat.yml that indicates the environment at run time, and writing a custom step definition. The custom step definition could supply known production values in one case and insert the needed values in the others. The Gherkin would still read "Given I have 7 phone numbers", so the feature stays focused on the business value and the benefit to the user rather than the testing environment.
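Roughly, and only as a sketch (the profile names and the lookupKnownPhoneNumbers()/insertTestPhoneNumbers() helpers are made up; assuming Behat 3), that could mean passing the environment to the context from behat.yml:

```yaml
# behat.yml sketch
default:
    suites:
        default:
            contexts:
                - FeatureContext:
                      environment: qa

live:
    suites:
        default:
            contexts:
                - FeatureContext:
                      environment: live
```

```php
<?php

use Behat\Behat\Context\Context;

class FeatureContext implements Context
{
    private $environment;
    private $phoneNumbers = [];

    // Behat passes the "environment" value configured in behat.yml here.
    public function __construct($environment = 'qa')
    {
        $this->environment = $environment;
    }

    /**
     * @Given I have :count phone numbers
     */
    public function iHavePhoneNumbers($count)
    {
        if ($this->environment === 'live') {
            // On production, rely on known stable records instead of inserting data.
            $this->phoneNumbers = $this->lookupKnownPhoneNumbers($count);
        } else {
            // In QA/RC, create exactly the data the scenario needs.
            $this->phoneNumbers = $this->insertTestPhoneNumbers($count);
        }
    }
}
```

Run with bin/behat --profile=live against production and with the default profile elsewhere; the feature file itself never mentions the environment.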