Test Execution
The process of running test cases and recording the actual results.
Full Definition
Test execution is the phase where testers perform the steps defined in test cases and compare actual outcomes with expected results. It's the core activity of software testing — everything else (planning, case design, environment setup) exists to support this moment when a human or script actually exercises the software and observes what happens. It's where theory meets reality.
The test execution process typically follows these stages:
1. Preparation: Set up the environment, seed test data, verify preconditions are met, and ensure the correct build is deployed
2. Execution: Follow test steps exactly as written, interacting with the application as specified
3. Observation: Carefully record what actually happens at each step, noting any deviations from expected behavior
4. Comparison: Compare actual results against expected results to determine pass or fail
5. Documentation: Record status, capture evidence (screenshots, logs, videos), and write execution notes
6. Defect Logging: Create detailed bug reports for any failures, linking them back to the test case
7. Re-testing: After defects are fixed, re-execute the failed tests to verify the fix
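The execute-observe-compare loop at the heart of these stages can be sketched in a few lines. This is a hypothetical illustration, not BesTest's implementation: the step descriptions, the `StepResult` record, and the `execute_case` helper are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical record of one executed test step (names are illustrative).
@dataclass
class StepResult:
    step: str
    expected: str
    actual: str

    @property
    def passed(self) -> bool:
        # Comparison stage: actual result vs. expected result
        return self.actual == self.expected

def execute_case(steps):
    """Run a list of (description, expected, action) tuples, recording
    what actually happened at each step before judging pass/fail."""
    results = []
    for description, expected, action in steps:
        actual = action()  # Execution stage: perform the step
        # Observation + documentation: record the actual outcome, not just a verdict
        results.append(StepResult(description, expected, actual))
    status = "Pass" if all(r.passed for r in results) else "Fail"
    return status, results

# Toy two-step case where the second step deviates from expectations.
steps = [
    ("open login page", "login form shown", lambda: "login form shown"),
    ("submit credentials", "dashboard shown", lambda: "error page shown"),
]
status, results = execute_case(steps)  # status == "Fail"
```

Note that even the passing step's actual result is kept; that is what gives the execution record its evidentiary value later.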
Test execution can be:
- Manual: Human testers perform steps, observe results, and use judgment. Best for exploratory testing, usability testing, and scenarios requiring human interpretation.
- Automated: Scripts execute tests programmatically, compare results, and report outcomes. Best for regression tests, data-driven tests, and tests that need to run frequently.
- Hybrid: A combination where the test case is documented for manual execution but certain verification steps or data setup are automated. This is how most real-world teams operate.
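Automation pays off most for the data-driven case: one verification routine runs once per input row. A minimal sketch, with an invented `apply_discount` function standing in for the system under test:

```python
# Hypothetical system under test: a 10% discount for the code "SAVE10".
def apply_discount(total: float, code: str) -> float:
    return round(total * 0.9, 2) if code == "SAVE10" else total

# Data-driven cases: (input total, discount code, expected result).
cases = [
    (100.00, "SAVE10", 90.00),
    (100.00, "BADCODE", 100.00),
    (59.99, "SAVE10", 53.99),
]

# Execute and compare every row; collect only the deviations.
failures = [
    (total, code, expected, apply_discount(total, code))
    for total, code, expected in cases
    if apply_discount(total, code) != expected
]
```

Adding a new regression case is one more row of data, which is why suites like this are cheap to run on every build.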
A common mistake during test execution is rushing through steps and not documenting actual results properly. When a tester marks a test as "Pass" without noting what they actually observed, the test result loses its evidentiary value. If a defect surfaces later in production, nobody can verify what was actually tested. Similarly, testers sometimes mark tests as "Fail" without providing enough information for a developer to reproduce the issue — a failure without clear reproduction steps is almost worse than no testing at all, because it creates noise without actionable information. Another pitfall is executing tests against the wrong build or environment, leading to results that don't reflect the actual state of the software being released.
In practice, experienced testers add value beyond simply following scripts. They notice things that aren't covered by the test case — a slow page load, a misaligned button, an unexpected error in the browser console — and either log separate defects or add notes to the execution record. This observational skill is what separates a good tester from someone who merely follows steps mechanically. Teams that value this kind of attentive execution encourage testers to spend extra minutes exploring around the test case's boundaries, even during scripted execution. Many teams also time-box their execution sessions, taking breaks to avoid the fatigue that leads to sloppy testing and missed defects.
Examples
1. Executing the login test case with valid credentials and marking Pass after verifying the user is redirected to the dashboard with the correct welcome message and session cookie set
2. Running the full 200-test regression suite against the release candidate build and logging 8 failures as defects in Jira with screenshots and browser console logs attached
3. Automated nightly test execution in the CI/CD pipeline that runs 500 API tests and 100 UI tests, sending a summary report to Slack with pass/fail counts and links to failed test details
4. Re-executing 5 previously failed test cases after the development team deployed fixes, verifying each defect is resolved and marking the tests as Pass with "re-test after BUG-xxx fix" notes
5. Executing a checkout test case that requires specific test credit card numbers, verifying the payment processes successfully and the order confirmation page shows the correct total, tax, and shipping amounts
6. Running a blocked test case that was previously waiting on an environment issue — the tester updates the status from Blocked to In Progress, executes the steps, and records the final result
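The nightly-pipeline example boils down to aggregating raw statuses into a pass/fail summary. A sketch of that aggregation, with made-up counts:

```python
from collections import Counter

# Hypothetical raw execution statuses from one nightly run.
statuses = ["Pass"] * 95 + ["Fail"] * 3 + ["Blocked"] * 2

counts = Counter(statuses)
total = sum(counts.values())
pass_rate = counts["Pass"] / total * 100

# The kind of one-line summary a pipeline might post to chat.
summary = (
    f"{counts['Pass']}/{total} passed ({pass_rate:.0f}%), "
    f"{counts['Fail']} failed, {counts['Blocked']} blocked"
)
```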
In BesTest
BesTest provides a dedicated execution interface where testers work through test case steps and mark results as Pass, Fail, Blocked, or Skipped. When a test fails, testers can create a linked Jira defect directly from the execution screen, automatically carrying over the test context. All execution results are tracked in real-time on the dashboard.
Related Terms
Test Case
A documented set of conditions and steps used to verify that a software feature works as expected.
Test Cycle
A single iteration of testing a specific set of test cases, typically associated with a release or sprint.
Defect (Bug)
A flaw in the software that causes it to behave incorrectly or unexpectedly.
Expected Result
The anticipated outcome of a test case step or the test case as a whole.
Precondition
The required state or setup that must exist before a test case can be executed.
Test Automation
Using software tools to execute tests automatically without manual intervention.
Test Environment
The hardware, software, network, and configuration setup where tests are executed.
See Test Execution in Action
Experience professional test management with BesTest. Free for up to 10 users.
Try BesTest Free