A Conversation about Testing within BDD


A lot of processes were commoditized when agile became mainstream—Scrum, kanban, behavior-driven development (BDD), and even agile itself. I have gained some fairly in-depth experience with BDD over the past year. When I started researching BDD, the emphasis was on conversation—specifically, conversation structured around a “given-when-then” format. Actually trying to use this format has caused me to develop some ideas around BDD that are exactly the opposite of what the community values.

When I talk with BDD practitioners who claim to be serious about BDD implementations and the value it brings, they say conversation is the most important part of the process. The idea is that any change can begin from a basic set of examples that describe the current state, an action that is supposed to occur, and what results we should expect.
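To make that concrete, here is a minimal sketch of what such an example might look like in Cucumber's plain-text (Gherkin) syntax. The shopping-cart feature and step wording are invented purely for illustration and are not from any project described here:

    Feature: Discount codes at checkout

      Scenario: Applying a valid discount code
        Given a cart containing one item priced at 50
        When the customer applies the discount code "SAVE10"
        Then the order total is 45

The given line establishes the current state, the when line is the action, and the then line is the result the business expects.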

But when I try this, I have an impossible time sticking to the given-when-then convention. The team I work with largely takes a different approach. We start a change by talking about how we might test it: what we should test at the unit level, which services we should test and how, and what might be exercised through the browser.

Sometimes I will stub out a test in Cucumber, write any needed Page Object updates or new steps, and see where it fails. The failure points either to implementation we haven't written yet or to a misunderstanding of the current implementation. We continue test-driving our change, mostly at the unit level, using a red/green/refactor flow. Toward the end of the development cycle, we revisit our test coverage: I ask what coverage we have, what is missing, and what would be better done at another level in the stack. Often, the missing coverage at this point is the set of tests that end up written in the BDD convention and run against a browser.
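As a rough sketch of what that stubbing can look like, here is a hypothetical step-definition class for the scenario above, written against cucumber-jvm. CheckoutPage is an invented Page Object standing in for whatever wraps the browser; only the io.cucumber.java.en annotations and the JUnit assertion are real API:

    // Hypothetical step definitions backing the scenario sketched earlier.
    // CheckoutPage is an invented Page Object that wraps browser interaction.
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertEquals;

    public class CheckoutSteps {

        private final CheckoutPage checkoutPage = new CheckoutPage();

        @Given("a cart containing one item priced at {int}")
        public void a_cart_containing_one_item_priced_at(int price) {
            checkoutPage.addItemWithPrice(price);
        }

        @When("the customer applies the discount code {string}")
        public void the_customer_applies_the_discount_code(String code) {
            checkoutPage.applyDiscountCode(code);
        }

        @Then("the order total is {int}")
        public void the_order_total_is(int expectedTotal) {
            // A failure here is the "red" that tells us where to start digging.
            assertEquals(expectedTotal, checkoutPage.orderTotal());
        }
    }

When a step like this fails, the plain-text step name is what shows up in the run, which matters for the point about CI below.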

When we build tests this way, the plain-text convention of Cucumber seems superfluous. Business people aren’t using it to write or review tests, and it’s an extra layer of abstraction to maintain. I complained about this for a long time, but I’ve recently found one use for it. If you are delivering software frequently in small batches, these tests will eventually fail. Each failure in a CI run displays the test and the test step that failed. The plain-text convention makes it simple to know where to begin exploring when I see a failure in CI: rather than a line number and an exception, I see a step and a description of the value the customer expects from the software.

Much like Scrum, I think BDD is a set of training wheels. The given-when-then format can force thinking about how customers will use your software when that conversation isn’t happening, or when your team doesn’t know where to start. If you have teams that are capable of talking with each other and testing the changes they implement, then you probably don’t need to constrain how conversation happens.
