
Data-Driven Testing

Data-Driven Testing allows you to execute a single trace multiple times using different input data.

Written by Georgij Lazarevski

Each row of the test data represents one iteration of the trace, enabling efficient testing of multiple scenarios without duplicating traces.

Test data can be provided using:

  • JSON

  • Excel (.xlsx)

  • CSV

  • Manual JSON input

The trace will automatically iterate through all rows of the provided data until the final iteration is completed.

After execution, a report is generated that can be accessed from:

  • The Trace Session History page

  • The Last Report icon on the All Traces page

  • The Last Result button in the Editor


How Data-Driven Testing Works

When test data is added to a trace and the Enable Test Data checkbox is checked:

  • The trace becomes iterable.

  • Each row of the dataset is mapped to parameters in the trace steps.

  • Each row represents one execution unit (iteration).
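As an illustration, the row-to-iteration mapping described above can be sketched in Python. The step list, field names, and `run_iteration` helper are hypothetical, not product APIs:

```python
# Minimal sketch: each test-data row drives one iteration of the trace.
# The step definitions below are illustrative assumptions, not real trace steps.
test_data = [
    {"username": "user1@example.com", "password": "password123"},
    {"username": "user2@example.com", "password": "password456"},
]

steps = [
    {"action": "type", "target": "#username", "param": "username"},
    {"action": "type", "target": "#password", "param": "password"},
]

def run_iteration(row, steps):
    """Map one data row onto the trace steps, producing resolved actions."""
    return [
        {"action": s["action"], "target": s["target"], "value": row[s["param"]]}
        for s in steps
    ]

# One execution unit (iteration) per data row:
iterations = [run_iteration(row, steps) for row in test_data]
print(len(iterations))  # one iteration per row
```

The key point is that the dataset row supplies the values, while the trace steps supply the actions and targets.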

Inside the Editor

When working with a Data-Driven trace inside the Editor:

The Editor is primarily used for creating, configuring, and validating the trace—not for full-scale execution.

When Test Data is enabled:

  • You can preview how parameters are mapped to steps

  • Test data values can be injected into steps for validation purposes

  • You can step through the trace to verify that:

    • Parameters are correctly mapped

    • Steps behave as expected with sample data

This mode is intended for:

  • Setting up parameter mappings

  • Debugging individual steps

  • Validating test data structure

  • Verifying trace logic before running at scale

It is not intended for executing all iterations of a dataset.

Execution Behavior in Background / CI / Scheduled Runs

When the same trace is executed outside the Editor (e.g., background runs, scheduled runs, CI):

  • Each row of test data is treated as a separate iteration.

  • The system creates multiple child traces, one per iteration.

  • Iterations are executed in parallel using multiple virtual machines (VMs / workers).

  • The number of parallel executions depends on the Maximum Concurrency for Builds setting in Preferences.

Example

If you have:

  • 15 test data rows

  • Maximum concurrency = 15

  • All VMs are available

Then:

  • 15 child traces are created

  • Each runs on a separate VM

  • All executions happen in parallel

  • Total execution time ≈ time of a single iteration (e.g. ~1 minute instead of 15+ minutes)

If Concurrency is Lower Than Dataset Size

If you have:

  • 15 iterations

  • concurrency = 5

Then:

  • Only 5 VMs run at a time

  • Remaining iterations are queued

  • Execution continues in batches until all iterations complete
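The batching arithmetic can be sketched as a rough model (real run times vary per iteration, so treat this as an upper-bound estimate):

```python
import math

def total_batches(iterations, concurrency):
    """Number of sequential batches when iterations exceed concurrency."""
    return math.ceil(iterations / concurrency)

def estimated_total_minutes(iterations, concurrency, single_run_minutes):
    """Rough upper bound: batches run one after another."""
    return total_batches(iterations, concurrency) * single_run_minutes

# 15 iterations, concurrency 5, ~1 minute per iteration:
print(total_batches(15, 5))               # 3 batches
print(estimated_total_minutes(15, 5, 1))  # ~3 minutes instead of ~15
```

With concurrency equal to or above the dataset size, the batch count is 1 and the total time collapses to roughly a single iteration's duration, matching the example above.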


Adding Test Data in the Editor

Test data is configured in the OPEN URL step, which is always the first step of a trace.

The Test Data section contains the following controls:


1. Enable Test Data

The Enable Test Data checkbox activates Data-Driven Testing.

  • This checkbox is automatically enabled when test data is added.

  • When enabled, the trace runs multiple iterations using the dataset.

  • When disabled, the trace runs once using the default values recorded in the trace steps.


2. View / Edit Test Data

Click VIEW / EDIT TEST DATA to open the test data editor modal.

The modal includes:

  • Example JSON structure to guide data input

  • VALIDATE JSON – checks if the entered JSON is valid

  • SAVE – stores the dataset for the trace

  • CANCEL – closes the modal without saving

Example JSON format:

[
  {"username": "user1@example.com", "password": "password123"},
  {"username": "user2@example.com", "password": "password456"}
]

Each object represents one iteration of the trace.
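For a rough idea of what dataset validation might involve, here is a sketch in Python. The specific checks are assumptions; the actual rules applied by VALIDATE JSON are not documented here:

```python
import json

def validate_test_data(raw):
    """Sketch of plausible dataset checks (assumptions): valid JSON,
    a non-empty array of objects, and consistent keys across rows."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, f"Invalid JSON: {exc}"
    if not isinstance(data, list) or not data:
        return False, "Test data must be a non-empty JSON array"
    if not all(isinstance(row, dict) for row in data):
        return False, "Every row must be a JSON object"
    keys = set(data[0])
    if any(set(row) != keys for row in data[1:]):
        return False, "All rows must share the same parameter names"
    return True, f"{len(data)} iterations"

ok, msg = validate_test_data(
    '[{"username": "user1@example.com", "password": "password123"},'
    ' {"username": "user2@example.com", "password": "password456"}]'
)
print(ok, msg)  # True 2 iterations
```

Consistent keys across rows matter because every iteration is mapped against the same trace steps.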


3. Download Test Data

Users can export the dataset using the DOWNLOAD menu.

Supported formats:

  • JSON

  • Excel (.xlsx)

  • CSV

This is useful for:

  • Sharing datasets

  • Editing test data externally

  • Creating larger datasets in spreadsheets
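For illustration, the shape of a CSV export (header row of parameter names, one data row per iteration) can be reproduced with Python's csv module. The exact export format used by the product is an assumption here:

```python
import csv
import io

# Dataset matching the JSON example earlier in this article.
test_data = [
    {"username": "user1@example.com", "password": "password123"},
    {"username": "user2@example.com", "password": "password456"},
]

def to_csv(rows):
    """Serialize the dataset: header = parameter names, one row per iteration."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(test_data))
```

A file in this shape can be edited in any spreadsheet tool and re-imported, which is why CSV/Excel is convenient for building larger datasets.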


4. Clear Test Data

The CLEAR button removes all test data from the trace.

Once cleared:

  • Data-Driven Testing is disabled

  • The trace returns to single execution mode


Running a Data-Driven Trace

When a trace with Test Data is executed, the behavior depends on where the trace is run.


Using the Editor (Preparation Only)

The Editor should be used to:

  • Record and modify trace steps

  • Configure and manage test data

  • Map dataset parameters to trace steps

  • Validate logic using sample data

While step-by-step validation is available, the Editor is meant for setup and verification, not for running full Data-Driven executions.

Running in Background / CI / Scheduled Runs

  • Each row of test data is executed as a separate iteration (child trace).

  • Iterations are distributed across multiple virtual machines (VMs / workers).

  • Execution happens in parallel, based on the configured concurrency.

Execution Flow

  1. The system reads all test data rows.

  2. A child trace is created for each row.

  3. Iterations are assigned to available VMs.

  4. Multiple iterations run simultaneously.

  5. Remaining iterations are queued if no VM is available.

  6. All results are aggregated into a single report.
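The flow above can be sketched with a thread pool standing in for the VM workers. This is a simplified model: `run_child_trace` and the dataset are hypothetical, and real iterations run on separate machines rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor
import time

test_data = [{"user": f"user{i}"} for i in range(15)]
MAX_CONCURRENCY = 5  # stands in for Maximum Concurrency for Builds

def run_child_trace(row):
    """Stand-in for one child trace executing on a worker."""
    time.sleep(0.01)  # simulate execution time
    return {"row": row, "status": "passed"}

# The executor runs at most MAX_CONCURRENCY tasks at once and queues
# the rest, just like iterations waiting for a free VM.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
    results = list(pool.map(run_child_trace, test_data))

# All results aggregated into a single report:
report = {
    "total": len(results),
    "passed": sum(r["status"] == "passed" for r in results),
}
print(report)
```

The aggregation step at the end mirrors how all iteration results end up in one report regardless of how many workers ran them.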

Execution Time Behavior

  • High concurrency (≥ number of iterations):
    All iterations run in parallel → total time ≈ single trace execution time

  • Limited concurrency:
    Iterations run in batches → total time depends on queueing


Viewing Test Results

You can open a report from the All Traces page by clicking the Results icon for a specific trace, which gives you details in three different places for that trace. Alternatively, go to the Dashboard and click View Build History to see details for all of the Project's traces:

  • All Traces page → Results icon

    • Summary - shows the last executed iteration and its parameters

    • Test Data Run - all iterations/sessions for that execution of the trace

    • Trace Session History - all sessions

  • View Build History on Dashboard
    Shows the list of each browser session, including iterations for traces with test data enabled as well as runs of normal traces without test data.


    Each row contains:

    • Build

    • Build Date

    • Number of tests failed or passed

    • Execution time

    • Results Icon


    The Results icon opens the details for that build. If the build was a browser session with test data iterations, it also includes links for each browser run that redirect to the Test Data Run table, where you can find the details of every iteration of that trace run.


Results Overview from All Traces


The Summary / Overview tab displays:

  • Last iteration

  • Total execution duration

  • Total number of iterations

  • Test Data Run ID

  • Parameters used in the last iteration

Test Data Run Tab

The Test Data Run tab shows a table containing the results of every iteration for that specific run (Test Data Run ID).

Each row contains:

  • Status (Passed / Failed)

  • Test Variables (parameters used for each iteration)

  • Date

  • Browser

  • Execution time

  • Results Icon

Clicking the Results icon opens the "Trace Build Report Details" modal, whose tabs contain data only for that specific iteration run (browser session).


Email Reports

If email notifications are configured, a report is sent after execution.

The email includes:

  • Total iteration execution duration

  • Number of failed iterations per browser (if more than one browser is configured)

If all iterations pass, the email simply reports that the trace succeeded.


The email also includes links that redirect to the Test Data Run view, where you can see all the iterations for that specific run and browser.

(Screenshots: example email reports for Chrome and Firefox.)

Parameter Mapping

When using Data-Driven Testing, the parameter names used in the trace must match the names in the test data.

The mapping works as follows:

For example, the Preview shown in the screenshot above displays the first test data row:

{
  "password": "secret_sauce",
  "user-name": "standard_user"
}

These keys can then be used for parameter mapping in the trace steps.


JSON Test Data

For JSON datasets, the key names of each object act as parameter names.

Example:

[
  {"username": "user1@example.com", "password": "password123"},
  {"username": "user2@example.com", "password": "password456"}
]

In this case:

  • username → maps to the username parameter in the trace

  • password → maps to the password parameter in the trace

Each object represents one iteration of the trace.


Excel / CSV Test Data

For Excel (.xlsx) or CSV files, the first row header cells define the parameter names.

Example spreadsheet (matching the JSON example above):

  username            | password
  --------------------|------------
  user1@example.com   | password123
  user2@example.com   | password456

In this case:

  • The column headers (username, password) are used as parameter names.

  • Each row below the header becomes one iteration of the trace.
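This header-to-parameter convention works the same way Python's csv.DictReader treats a CSV file, which makes for a convenient way to sanity-check a dataset before uploading (illustrative only; the file content below is a hypothetical example):

```python
import csv
import io

# Hypothetical CSV matching the spreadsheet example above.
csv_text = """username,password
user1@example.com,password123
user2@example.com,password456
"""

# DictReader uses the first row as field names, mirroring how the
# header cells become parameter names.
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(list(rows[0]))  # header cells become the parameter names
print(len(rows))      # each data row below the header is one iteration
```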


Important

The parameter names in the dataset must match the parameter names used in the trace steps, otherwise the values will not be injected during execution.
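A quick way to catch such mismatches before running at scale is to compare the dataset's keys against the parameter names the trace expects. The helper below is hypothetical and not part of the product; the product performs this matching internally:

```python
def check_parameter_mapping(dataset, trace_params):
    """Return trace parameter names that no dataset row provides.
    (Hypothetical helper for pre-flight checks, not a product API.)"""
    data_keys = set().union(*(row.keys() for row in dataset))
    return sorted(set(trace_params) - data_keys)

dataset = [{"username": "user1@example.com", "password": "password123"}]
missing = check_parameter_mapping(dataset, ["username", "password", "otp_code"])
print(missing)  # ['otp_code'] -- this parameter would not be injected
```

Any name returned by such a check would silently receive no value during execution, which is exactly the failure mode the note above warns about.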
