Survey of glue code in BDD tools


Many Behavior-Driven Development (BDD) tools are available in practice. Some accept specifications as plain text (for example in Given...When...Then format, i.e., Gherkin syntax), which is then connected through 'glue' code to the underlying tests or to the actual implementation. Other tools instead express the specification as code with added annotations, to reduce re-writing of natural-language specifications. The characteristics of such 'glue' code and annotations have not yet been studied. It is also unclear how much 'glue' code is auto-generated and how much must be written manually.


If the amount of 'glue' code is equal to or greater than the amount of actual implementation code (what are the parts of the glue code? How does the LOC of glue code, in different cases, compare against the LOC of what it connects to?), then the approach is flawed: it adds work for developers, because non-technical stakeholders are unlikely to be able to write such 'glue' code themselves.

Research questions

  1. What are the common and unique characteristics of BDD specifications?
  2. What parts of the specifications go into 'glue' code? How often?
  3. What are the characteristics of the 'glue' code? Are there commonalities across different tools?


  • Compiling a list of BDD tools from the literature and the internet (e.g., StackOverflow tags/questions)
  • Analyzing the shortlisted tools across several parameters, especially the characteristics of their 'bridge' or 'glue' code

Can be extended as a bachelor thesis!

Last changed by nitish on 29 July 2020