Designing Data Models for Self-Documented Tests


Modern software applications span the full stack, from operating system components to web apps running on multiple devices, so writing automated tests for them can be very complicated. Because those tests often have diverse implementations and rely on different tools, documenting and interpreting test results becomes an even greater challenge.

Data models can enable us to collect and process test data more dynamically and uniformly.

With these models, self-documented tests, whether implemented in different programming languages, built and deployed through continuous integration and deployment, or run on virtual machines or local desktops with specific configurations, produce test information and results that can be collected, processed, and reported in a homogeneous way.

Test data models have three parts: definitions, executions, and results. Definitions describe test scenarios and their implementations; executions capture runtime data and configuration information; and results are execution data that have been processed and presented in an organized way.

The data models are in JSON format, which can easily be stored in databases, repositories, or cloud storage, regardless of the test applications' implementations or execution environments.
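
As a rough illustration, a single test record following this three-part model might look like the Python sketch below, which builds the record and serializes it to JSON. All field names and values here are illustrative assumptions, not a fixed schema.

    # A minimal sketch of one test record with the three parts
    # described above: definition, execution, and results.
    import json

    test_record = {
        "definition": {                  # static data embedded in the source
            "test_id": "login-0001",
            "scenario": "User logs in with valid credentials",
            "story": "SPRINT-42/US-117",
        },
        "execution": {                   # runtime data captured per run
            "started_at": "2019-09-30T14:05:11Z",
            "duration_seconds": 12.4,
            "environment": {"os": "Windows 10", "browser": "Chrome 77"},
        },
        "results": {                     # processed outcome data
            "status": "pass",
            "error_message": None,
        },
    }

    print(json.dumps(test_record, indent=2))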

To design effective data models for self-documented tests, there are three important things to consider: what to document, collect, and report. 

1. What to document

First and foremost, think about what you are testing. This includes definitions describing test scenarios, implementation details and related documents, test specifications, feature-tracking systems, and agile stories and sprints. Test definitions are static data that are typically embedded in the source code. Document not only what is tested, but more importantly, how tests are implemented.
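
One way to embed static definitions in the source itself is a decorator that attaches metadata to each test function, as in the Python sketch below. The decorator and its fields are hypothetical, not part of any particular framework.

    # A minimal sketch: attach static definition data to a test function.
    def definition(**meta):
        def wrap(test_func):
            test_func.definition = meta  # the definition travels with the test
            return test_func
        return wrap

    @definition(
        test_id="login-0001",
        scenario="User logs in with valid credentials",
        story="SPRINT-42/US-117",
    )
    def test_valid_login():
        assert True                      # placeholder for the real test steps

    print(test_valid_login.definition["scenario"])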

2. What to collect

Execution data are runtime data dynamically collected during and at the end of each test run. The usual pass/fail rates and error messages are not enough to describe the dynamics of test executions and testing environments. Execution data capture not only whether each test case passed or failed, but also execution time, resource usage, and environment information during the run.
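
As a rough sketch, a small wrapper can capture timing and basic environment details around each test run using only the Python standard library. The record layout is an assumption consistent with the earlier example.

    # A minimal sketch of collecting execution data around one test run.
    import json
    import platform
    import time

    def run_with_execution_data(test_func):
        record = {
            "environment": {             # environment info at execution time
                "os": platform.system(),
                "os_version": platform.release(),
                "python": platform.python_version(),
                "host": platform.node(),
            }
        }
        start = time.time()
        try:
            test_func()
            record["status"] = "pass"
            record["error_message"] = None
        except AssertionError as err:
            record["status"] = "fail"
            record["error_message"] = str(err)
        record["duration_seconds"] = round(time.time() - start, 3)
        return record

    print(json.dumps(run_with_execution_data(lambda: None), indent=2))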

3. What to report

Reporting data are selected data elements from the definition and execution data, depending on the type of testing. With data stored in JSON format, collecting results, processing and aggregating data, and generating reports become straightforward. Various types of reports can focus on test results with pass/fail data, specific execution data, or performance. Reports can also be generated with third-party reporting and analytics tools to transform the data into different presentations.
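
For instance, a short script could read stored JSON records and aggregate pass/fail counts into a simple summary. The results directory and field names below are assumptions carried over from the earlier sketches.

    # A minimal sketch of report generation: read stored JSON records
    # and aggregate pass/fail counts into a summary line.
    import json
    from pathlib import Path

    def summarize(results_dir="results"):
        results = Path(results_dir)
        records = []
        if results.is_dir():             # guard against a missing directory
            for path in results.glob("*.json"):
                records.append(json.loads(path.read_text()))
        passed = sum(1 for r in records if r.get("status") == "pass")
        total = len(records)
        if total:
            print(f"{passed}/{total} tests passed ({100 * passed / total:.1f}%)")
        else:
            print("no result records found")

    summarize()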

Given how much information is collected during test executions, you should end up with a rich data set to process and present.

Which languages you implement your tests in, which database or storage you use for results, and where you publish reports will depend on your applications, development processes, and environment capacity. For the data models themselves, there is no limit.

Mimi Balcom Meng is presenting the session Creating Self-Documenting, Reportable, DevOps-Driven Tests at STARWEST 2019, September 29–October 4 in Anaheim, California.
