We have recently started using BDD to write our requirements. It's been really helpful; it has made communication between analysts and developers a lot easier (combined with user interfaces and old-school requirements).
Now we are thinking about writing our test cases with BDD. When I search online for best practices, I see a lot of different variations on how to write them.
There are some examples like:
The problem is that almost all of the examples cover very simple cases; we, on the other hand, would like to write scenarios that include multiple actions, multiple system outputs (warnings, errors, etc.) and multiple outcomes.
We are trying to figure out the best way to write BDD for the following scenario:
We want the user to perform the following actions:
The reason we have such a long story is that this is a common scenario, and we want to make sure that users are able to get back to the happy path.
What do you think is the best way to handle such a scenario using BDD?
BDD scenarios describe test cases in plain-text form, and though they use Gherkin keywords, they can be written by people who aren't technically savvy. Quite often they are written by product managers or subject matter experts, and are then automated by the QA team or dedicated automation engineers.
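For instance, a minimal scenario might look like the sketch below (the domain and step wording here are purely illustrative, not taken from the question):

Scenario: Warn about an out-of-stock product
  Given a registered user with an empty basket
  When they try to add a product that is out of stock
  Then they should see a warning that the product is unavailable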
I'm going to try and rephrase what you're asking for here, in the hopes that it will clear some things up.
We have recently started using BDD to write our requirements... Now we are thinking about writing our test cases with BDD.
We've recently started to use examples to clarify our requirements... now we're thinking of automating those examples.
When I search online for best practices, I see a lot of different variations on how to write them.
When I search online, I see a lot of different variations of context, event and outcome.
(It's not just in the writing; it's in the conversation that leads to the writing. This is why you get variation; because conversation is really fuzzy.)
The problem is that almost all of the examples cover very simple cases
The problem is that back in the old days, early adopters like me used things like login as an example.
We were wrong to do so. Simple examples don't actually help you understand BDD. The whole beauty was that when we talked to the stakeholders who understood the problem (who might be security or infrastructure experts, for example; it doesn't just apply to the business experts), we learned something. Here's a talk on the things that we did wrong back in the early days of BDD; you're encountering the cost of some of those. Sorry.
I wrote a whole blog post on the 3 aspects of BDD: Exploration, Specification and Test by example. Most people focus on the 2nd and 3rd of those, but the 1st is implicit. Exploring is important, and conversations around scenarios are a really cheap way to do that!
We'd like to write scenarios that include multiple actions, multiple system outputs (warnings, errors, etc.) and multiple outcomes... The reason we have such a long story is that this is a common scenario, and we want to make sure that users are able to get back to the happy path.
We want to check full customer journeys to make sure that our system is at least usable, no matter what else happens.
So, if you're wanting to use BDD tools like Cucumber to write a whole, full-stack, automated customer journey, rather than a single example of one aspect of behaviour (what we call a scenario), then... it's not BDD.
However, it is still a really good idea. It's not BDD, but it doesn't mean it's a bad thing. I've worked with a number of orgs who've done this and benefited from it. (Maybe it should have a name.)
Here are the hints and tips I can give you based on that experience:
Do not use these as regression tests! Trying to go through every journey is an exponential 2^n effort; forget it. Pick a few journeys (3 per ideal session seems pretty typical) and try to pick different, but typical, customer choices. Don't use these to test the edge-cases. You're just checking that your main customer journeys are still stitched together.
Declarative over Imperative still rules. Avoid talking about the UI; phrase the journey in terms of what's being achieved at each stage.
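As a rough sketch (the step wording here is hypothetical), here's the same intent phrased both ways:

# Imperative: coupled to the UI
When she clicks the "Filters" button
And selects "Poison" from the traits dropdown
And clicks "Apply"

# Declarative: phrased in terms of what's being achieved
When she adds a filter for "Poison" traits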
If you can do this, you get to reuse your steps from your smaller scenarios. Put your customer journeys (sometimes referred to as "smoke tests") in a separate place, even if they're run in the same part of the build. Run them all first, until you don't need to any more (a month or so of these breaking will make the team fix the root cause, environment issues, etc!).
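One way to keep them separate, assuming a Cucumber-style runner that supports tags, is to tag the journey features and filter on that tag when you run them, e.g. with something like cucumber --tags @journey:

@journey
Feature: Full customer journeys
  # Journey scenarios live here, reusing the step definitions
  # already written for the smaller, per-behaviour scenarios.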
Be specific. It's not just "a user"; it's Sue, the girl down the road who's using your polygons on her map to try to spot Pokemons she hasn't caught yet. Specific stories really catch people's imagination and make the journeys memorable. Make different journeys match different personas, if you can.
Often the "then" of one scenario forms the "given" of another one with a different aspect of behaviour. If you're stringing them together, don't worry about the "then". You don't need to check for the outcome if you're about to use it in the next step. For instance, if a menu needs to show a particular choice, don't check for the choice; just use it and assume it's there. UI checks can be expensive and with these longer journeys we should be in a place where these are generally passing. If they're not, it's pretty trivial to add the missing steps for the period in which we're working out why they're broken. Usually these are integration tests more than anything; checking that particular services are connected, etc., before the longer scenario suite is run.
If your common customer journey includes users being confused, doing the wrong thing or otherwise wasting their time, change your UI. User Experience expertise is still really, truly important, and it isn't really part of BDD, since it's hard to come up with concrete examples for "easy" or "forgiving"; comparisons and suggestions work better for the UI. BDD isn't a silver bullet.
It's very common to have the artifacts from the conversations around full customer journeys written down or even spread across an entire wall of an office. The automated versions, however, are normally created after the smaller scenarios have been completed and the functionality is working.
There is normally duplication between the full, end-to-end customer journeys and the smaller scenarios that cover aspects of behaviour like edge-cases. The end-to-end journeys provide fast feedback and ensure that nobody's time is being wasted; the smaller scenarios provide documentation on how the system should behave. Duplication in this instance is OK.
If you decide that you want this to be a full journey, here's the kind of thing I'd expect to see (and all I'm doing here is the "declarative vs. imperative" thing):
Given Sue's registered to catch Pokemons
And Bulbasaurs, Koffings and Pikachus were caught in Trafalgar Square this year
When she filters for Pokemons caught between January and July
And adds a filter for "Poison" traits
And filters for "Bulbasaur"
When she searches for Pokemons
Then she should be asked to select an area of the map
When she selects an area around Trafalgar Square
Then she should be shown the Bulbasaur density
But not the Pikachu or Koffing density.
Use specific examples. It's much easier to understand the above, and to see flaws in it (or in my understanding of Pokemon Go, which I haven't yet played), when it actually has realish ideas in it. That's something these journeys have in common with smaller scenarios.
You'll also see that there are many, many "whens", and they all feed into each other. If we were discussing single aspects of behaviour, each of these would be prefaced by a "given" outlining the context of what came before, and the outcome which allowed the next "when" would be the "then". In this case though we're chaining them together. Uninterrupted sequences of "whens" are very common and completely OK in these kinds of journeys, so long as you respect that this is not looking at a single aspect of behaviour, nor providing examples of it (so it's not really BDD). "Thens" mid-journey appear when the outcome is an important part of the journey, particularly providing non-specific guidance which the user has to respond to specifically.
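For instance, if we pulled the last aspect of behaviour out of that journey into its own scenario, the earlier "whens" would collapse into a "given" and the outcome would become the "then", something like:

Given Sue has searched for Bulbasaurs with "Poison" traits caught between January and July
And she has been asked to select an area of the map
When she selects an area around Trafalgar Square
Then she should be shown the Bulbasaur density
But not the Pikachu or Koffing density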
Do not automate these with misunderstandings in place! Automated customer journeys represent a significant investment (though they're pretty easy to put together once you have smaller scenarios covering the same functionality). Get the functionality working first, and show it to the relevant stakeholders. You don't want to invest heavily in things that are likely to change with learning and feedback.
Hope this is helpful, and thanks for making me think this through!