Why Review Test Cases Before Execution?
A test case that misses a critical scenario, has ambiguous steps, or checks the wrong expected result is worse than no test case at all. It gives you false confidence. You think a feature has been validated when it hasn't been tested properly.
Test case reviews catch these problems before execution, when they are cheap to fix. After execution, a poorly written test case means wasted effort: someone spent time running steps that did not actually validate the requirement.
Common issues caught during review:
- Missing edge cases: The test case covers the happy path but ignores error handling, boundary values, or empty states
- Ambiguous steps: "Check that the page loads correctly" — what does "correctly" mean? A reviewer will flag this and ask for specific, observable criteria
- Wrong expected results: The expected result describes old behavior that has since changed, or it describes the wrong behavior entirely
- Missing preconditions: The test assumes a setup state that is not documented, causing the tester to get stuck on step one
- Redundant tests: The test case duplicates coverage already provided by another test, wasting execution time
When reviews matter most:
- Regulated environments: Healthcare, finance, and automotive industries often require documented review and approval of test artifacts before execution
- UAT cycles: When business stakeholders will execute tests, the steps need to be crystal clear. A reviewer familiar with the business domain can validate this.
- New team members: When someone new writes their first test cases, a review from an experienced team member is a fast way to align on quality standards
- High-risk features: For features where a defect would have significant business impact (payments, security, data integrity), an extra pair of eyes on the test cases is worthwhile
Submitting a Test Case for Review
BesTest's review workflow follows a simple model: the author submits a test case for review, the reviewer either approves it or requests changes, and an approved test case is ready for execution. Here is how to use it from the author's side.
Write the Test Case
Before submitting, make sure your test case is complete:
- Name clearly describes what is being tested
- Description provides context on the feature or behavior
- Preconditions list everything that must be true before testing begins
- Steps are specific, with one action per step
- Expected results are observable and unambiguous
Do a quick self-review before submitting. Ask yourself: could someone unfamiliar with this feature follow these steps and know exactly what to check?
Submit for Review
- Open the test case in BesTest
- Click "Submit for Review"
- The test case status changes to "In Review"
Once submitted, the test case enters the review queue. Reviewers on your team can see which test cases are waiting for review.
Wait for Feedback
The reviewer will either:
- Approve the test case, marking it ready for execution
- Request Changes, providing specific feedback on what needs to be updated
If changes are requested, you will see the reviewer's comments on the test case. Address each comment, update the test case, and submit it for review again.
What Happens After Approval
Once approved, the test case can be included in test cycles and executed. The approval is recorded in the test case's history with a timestamp and the reviewer's name. This creates a permanent record that the test case was reviewed before execution.
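The submit/approve/request-changes flow described above amounts to a small state machine. Here is a minimal sketch of it in Python; the status names and transition table are illustrative assumptions, not BesTest's internal model:

```python
from enum import Enum

# Illustrative review statuses (names are assumptions, not BesTest internals).
class Status(Enum):
    DRAFT = "Draft"
    IN_REVIEW = "In Review"
    CHANGES_REQUESTED = "Changes Requested"
    APPROVED = "Approved"

# Allowed transitions: submit -> review -> approve, or back to the author.
TRANSITIONS = {
    Status.DRAFT: {Status.IN_REVIEW},                       # Submit for Review
    Status.IN_REVIEW: {Status.APPROVED, Status.CHANGES_REQUESTED},
    Status.CHANGES_REQUESTED: {Status.IN_REVIEW},           # resubmit after fixes
    Status.APPROVED: set(),                                 # terminal: ready for test cycles
}

def transition(current: Status, target: Status) -> Status:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target

# Example: a test case that goes through one round of requested changes.
s = Status.DRAFT
s = transition(s, Status.IN_REVIEW)
s = transition(s, Status.CHANGES_REQUESTED)
s = transition(s, Status.IN_REVIEW)
s = transition(s, Status.APPROVED)
print(s.value)  # Approved
```

Note that "Approved" is terminal here: once a test case is approved, there is no path back except creating a new revision, which mirrors the audit-trail guarantee described later.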
Batch Submissions
If you have written several test cases for the same feature, submit each one for review individually rather than as a single lump. This lets the reviewer work through them one at a time and provide focused feedback on each.
Reviewing and Approving Test Cases
As a reviewer, your job is to evaluate whether a test case will actually validate what it claims to validate. Here is the workflow from the reviewer's perspective.
Finding Test Cases to Review
Test cases with status "In Review" are waiting for a reviewer. You can filter the test case list by status to see all pending reviews.
What to Check During Review
Work through this checklist for each test case:
| Check | What to Look For |
|---|---|
| Clarity | Can someone unfamiliar with the feature follow the steps without asking questions? |
| Completeness | Does the test cover the requirement it is linked to? Are edge cases addressed? |
| Preconditions | Is everything needed for the test clearly stated? Test data, user roles, system state? |
| Expected results | Are they specific and verifiable? "Page displays correctly" is not verifiable. "Page displays order total of $49.99 with shipping included" is. |
| Step granularity | Is each step a single action? Steps that combine multiple actions make it hard to identify where a failure occurs. |
| Requirement link | Is the test case linked to the correct requirement in the Coverage section? |
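Some of the checklist items above can be pre-screened mechanically before a human reviewer looks at the test case. The following is a hypothetical lint sketch; the field names, vague-word list, and rules are illustrative assumptions, not a BesTest feature:

```python
# Hypothetical pre-review lint: flags common checklist violations
# (vague expected results, multi-action steps, missing preconditions).
VAGUE_PHRASES = {"correctly", "properly", "as expected"}

def lint_test_case(case: dict) -> list[str]:
    issues = []
    if not case.get("preconditions"):
        issues.append("Preconditions are missing")
    for i, step in enumerate(case.get("steps", []), start=1):
        if " and " in step["action"].lower():
            issues.append(f"Step {i} may combine multiple actions")
        expected = step.get("expected", "").lower()
        for phrase in VAGUE_PHRASES:
            if phrase in expected:
                issues.append(f"Step {i} expected result uses vague wording: '{phrase}'")
    return issues

case = {
    "preconditions": [],
    "steps": [
        {"action": "Open the checkout page and enter valid data",
         "expected": "Page loads correctly"},
    ],
}
for issue in lint_test_case(case):
    print(issue)
```

A lint like this does not replace human review — it only clears the mechanical objections so the reviewer can focus on coverage and correctness.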
Approving a Test Case
If the test case meets your standards:
- Open the test case
- Click "Approve"
- The status changes to "Approved"
The approval is recorded with your name and a timestamp. The test case is now eligible for inclusion in test cycles.
Requesting Changes
If the test case needs work:
- Open the test case
- Click "Request Changes"
- Add specific, actionable feedback explaining what needs to change
Good review feedback is specific:
- Vague: "Steps are unclear" — the author does not know what to fix
- Specific: "Step 3 says 'enter valid data' but does not specify what fields to fill or what values to use. Please list each field and an example value."
After you request changes, the test case goes back to the author. They will update it and resubmit for review. You may go through multiple rounds of review until the test case is ready.
Review History and Audit Trail
Every review action in BesTest is recorded with a timestamp and the user who performed it. This creates a permanent audit trail for each test case.
What Gets Recorded
The audit trail captures each state transition:
| Event | Recorded Information |
|---|---|
| Submitted for Review | Who submitted, when |
| Changes Requested | Who requested changes, when, what feedback was given |
| Resubmitted | Who resubmitted, when |
| Approved | Who approved, when |
If a test case goes through multiple review rounds, every round is recorded. You can see the full history of how a test case evolved from initial draft to approved state.
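Conceptually, the audit trail behaves like an append-only event log: every state transition is recorded as an immutable, timestamped event. A minimal sketch under that assumption (illustrative only; BesTest's internal storage may differ):

```python
from datetime import datetime, timezone

# Minimal append-only audit trail sketch: events are recorded once
# and returned in chronological order, never edited or deleted.
class AuditTrail:
    def __init__(self):
        self._events = []

    def record(self, event: str, user: str, comment: str = "") -> None:
        self._events.append({
            "event": event,
            "user": user,
            "comment": comment,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def history(self) -> tuple:
        # Returned as an immutable tuple so callers cannot rewrite history.
        return tuple(self._events)

trail = AuditTrail()
trail.record("Submitted for Review", "alice")
trail.record("Changes Requested", "bob", "Step 3: specify fields and values")
trail.record("Resubmitted", "alice")
trail.record("Approved", "bob")
print([e["event"] for e in trail.history()])
```

The append-only property is what makes the trail usable as audit evidence: the record of a review round cannot be altered after the fact.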
Viewing the Audit Trail
Open any test case and look at its history. The timeline shows every review event in chronological order, making it easy to trace the review process.
Why the Audit Trail Matters
For teams in regulated industries, the audit trail provides documentary evidence that test cases were reviewed and approved before execution. This is often a requirement for:
- ISO/IEC standards: Quality management systems require documented review of test artifacts
- FDA regulated software: Medical device software testing requires evidence of independent review
- SOX compliance: Financial systems need documented controls, including testing procedures
- Internal audits: Even without external regulations, an audit trail demonstrates process discipline
Beyond compliance, the audit trail is useful for retrospectives. If a defect escapes to production, you can trace back through the review history to understand whether the test case was reviewed, what feedback was given, and whether the review process should be improved.
Audit Trail and Evidence
The review audit trail is separate from test execution evidence. For execution evidence (screenshots, logs, recordings), create or link a Jira issue from within BesTest and attach the evidence there. The review audit trail covers the pre-execution quality gate; the Jira-linked evidence covers post-execution documentation.
Tips for an Effective Review Process
A review process is only as good as the team's commitment to it. Here are practices that keep reviews productive without becoming a bottleneck.
Set a Review Turnaround Time
Agree on a maximum turnaround time for reviews; 24 hours is a reasonable default for most teams. If reviews sit in the queue for days, authors lose context and the process becomes a bottleneck.
Track how long test cases sit in "In Review" status. If the average exceeds your target, discuss capacity and workload with the team.
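If you can export the submit and review timestamps for each test case (the field names below are assumptions about such an export, not a BesTest API), the average turnaround is straightforward to compute:

```python
from datetime import datetime

# Sketch: average hours a test case spends in "In Review",
# given exported submit/review timestamps per test case.
def average_turnaround_hours(reviews: list[dict]) -> float:
    durations = [
        (r["reviewed_at"] - r["submitted_at"]).total_seconds() / 3600
        for r in reviews
    ]
    return sum(durations) / len(durations)

reviews = [
    {"submitted_at": datetime(2024, 5, 1, 9, 0), "reviewed_at": datetime(2024, 5, 1, 15, 0)},   # 6h
    {"submitted_at": datetime(2024, 5, 2, 10, 0), "reviewed_at": datetime(2024, 5, 3, 10, 0)},  # 24h
    {"submitted_at": datetime(2024, 5, 3, 8, 0), "reviewed_at": datetime(2024, 5, 5, 8, 0)},    # 48h
]
avg = average_turnaround_hours(reviews)
print(f"{avg:.1f}h")  # 26.0h -- above a 24h target, worth a team discussion
```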
Rotate Reviewers
Avoid having one person review everything. Rotating reviewers has two benefits:
- It spreads the workload so no single person is a bottleneck
- Different reviewers catch different things. Someone with deep domain knowledge catches missing business rules. Someone with testing expertise catches structural issues in the test steps.
Review in Small Batches
Reviewing 20 test cases at once leads to review fatigue and declining quality. If an author submits a batch, the reviewer should work through them in groups of 3-5 with short breaks in between.
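Splitting a large submission into review-sized groups is easy to script if you pull the pending queue from an export (a generic sketch; the test case IDs are made up):

```python
# Split a batch of pending test cases into small review groups
# so the reviewer can take breaks between them.
def batches(items: list, size: int = 5):
    """Yield review groups of at most `size` test cases."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

cases = [f"TC-{n}" for n in range(1, 13)]  # 12 submitted test cases
for group in batches(cases, size=4):
    print(group)
```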
Establish Review Standards
Document what "good enough" means for your team. Not every test case needs the same level of scrutiny:
- Critical path tests (login, checkout, payment): Thorough review with attention to edge cases and error handling
- Low-risk tests (UI text, tooltip content): Quick review focused on clarity and basic structure
Having explicit standards prevents debates about when a test case is "done" and helps new reviewers calibrate quickly.
Do Not Skip Reviews for Urgency
It is tempting to bypass the review process when a release deadline is tight. Resist this. An unreviewed test case that misses a critical scenario costs more time than the review would have taken. If timing is a concern, do a lightweight review (focus on completeness and expected results) rather than no review at all.
Use Reviews as a Teaching Tool
For teams with mixed experience levels, reviews are an opportunity for mentorship. Senior testers can explain why they request specific changes, helping junior testers internalize quality standards. Over time, the number of review rounds per test case should decrease as the team's skills level up.
Frequently Asked Questions
Is the review workflow mandatory?
No. The review workflow is optional. You can create test cases and include them in test cycles without going through a review. However, for teams that need quality control or compliance documentation, the review workflow provides a structured approval process with a full audit trail.
Can a test case be executed while it is in review?
Test cases that are in review status have not yet been approved. The review workflow is designed as a quality gate before execution. Approve the test case first, then include it in a test cycle for execution.
How do I attach evidence like screenshots to a review?
BesTest does not support direct screenshot attachment. Instead, create or link a Jira issue from the test case and attach screenshots and other evidence to the Jira issue. This integrates naturally with your existing Jira workflow and keeps evidence accessible to the broader team.
Add Review Rigor to Your Testing
Install BesTest and start reviewing test cases with a built-in approval workflow. Free for up to 10 users.
Install BesTest Free