Expected Result
The anticipated outcome of a test case step or the test case as a whole.
Full Definition
The expected result defines what should happen when a test case is executed correctly. It's the benchmark against which actual results are compared to determine pass or fail. Without a clear expected result, testing becomes subjective opinion rather than objective verification — two testers could execute the same test and reach different conclusions about whether it passed.
Characteristics of good expected results:
- Specific: "Dashboard displays user name 'John Doe' in the top-right corner" not "Page looks correct"
- Measurable: Can be objectively verified by any tester — no room for interpretation
- Unambiguous: Only one possible interpretation of what "pass" looks like
- Complete: Covers all relevant outcomes including visual state, data changes, and system behavior
- Observable: The tester can actually see or measure the result (not just "database updated correctly" unless they can verify it)
- Time-bound when relevant: "Email arrives within 5 minutes" or "Page loads in under 2 seconds"
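The contrast between a vague check and one meeting these characteristics can be sketched in code. This is a minimal illustration with a hypothetical `page` dictionary standing in for real UI state; the field names are assumptions, not any particular framework's API:

```python
def check_vague(page):
    # Vague expected result: "page looks correct" -- nothing objectively
    # verifiable, so two testers could reach different conclusions.
    return page.get("ok", False)

def check_specific(page):
    # Specific, measurable, unambiguous, and time-bound: exact text,
    # exact location, and a timing threshold for the observable result.
    problems = []
    if page.get("top_right_text") != "John Doe":
        problems.append("user name 'John Doe' not shown in top-right corner")
    if page.get("load_seconds", 0.0) > 2.0:
        problems.append("page took longer than 2 seconds to load")
    return problems  # an empty list means every expectation was met

page = {"top_right_text": "John Doe", "load_seconds": 1.4}
print(check_specific(page))  # -> []
```

Any tester (or script) running `check_specific` reaches the same pass/fail conclusion, which is exactly the property the list above describes.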
Expected results can be defined at different levels:
- Per step: What should happen after each individual action — useful for complex workflows where intermediate states matter
- Per test case: The final outcome after all steps are completed — useful for simpler tests where only the end state matters
- Implicit: Some expected results are understood (e.g., "no error messages appear") but it's better to make them explicit
The comparison of expected vs. actual results is the core mechanic of test execution:
- Match → Pass: The actual behavior matches the expected result
- Mismatch → Fail: The actual behavior deviates from the expected result — log a defect
- Cannot Determine → Blocked: Something prevents the tester from reaching the point where the comparison can be made
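The three outcomes above can be captured in a small sketch. The names here (`Verdict`, `evaluate`, the `reachable` flag) are illustrative assumptions, not part of any specific tool:

```python
from enum import Enum

class Verdict(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    BLOCKED = "Blocked"

def evaluate(expected, actual, reachable=True):
    # Core mechanic of test execution: compare expected vs. actual.
    if not reachable:
        # Something prevented the tester from reaching the comparison point.
        return Verdict.BLOCKED
    return Verdict.PASS if actual == expected else Verdict.FAIL

print(evaluate("Hello, John", "Hello, John"))          # -> Verdict.PASS
print(evaluate("Hello, John", "Hello, Jane"))          # -> Verdict.FAIL
print(evaluate("Hello, John", None, reachable=False))  # -> Verdict.BLOCKED
```

Note that Blocked is checked first: if the comparison point was never reached, the expected/actual comparison is meaningless, so neither Pass nor Fail applies.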
Vague expected results are one of the most pervasive problems in test case quality. Results like "system works correctly," "no errors," "page displays properly," or "data is saved" are nearly useless because they mean different things to different testers. What does "correctly" mean? What counts as an "error"? What does "properly" look like? These vague statements lead to inconsistent testing — one tester might pass a test that another would fail — and missed defects that hide behind ambiguous acceptance criteria. The fix is straightforward but requires discipline: every expected result should be specific enough that two different testers would independently reach the same pass/fail conclusion.
Another common mistake is writing expected results that only cover the happy path. A good expected result for a form submission should cover not just what happens when the form is submitted (e.g., "Success message appears"), but also observable side effects: was the data actually saved? Did the confirmation email get sent? Was the user redirected to the correct page? Did the page URL change? Testers who only check the most obvious outcome miss subtle defects. For example, a form might show a "saved successfully" message but silently fail to write to the database — if the expected result only says "success message appears," the tester marks it as Pass and the defect slips through.
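The "silent database failure" scenario can be made concrete with a toy in-memory app. Everything here (`FakeApp`, `submit_form`, the field names) is hypothetical, built only to show why the expected result must cover side effects:

```python
class FakeApp:
    # Toy stand-in for a real application with a form and a database.
    def __init__(self, write_succeeds=True):
        self.database = []
        self.write_succeeds = write_succeeds

    def submit_form(self, data):
        if self.write_succeeds:
            self.database.append(data)
        # The bug being illustrated: the success message appears
        # even when the database write silently failed.
        return "Saved successfully"

def execute_test(app):
    message = app.submit_form({"name": "Jane Smith"})
    return {
        "success message appears": message == "Saved successfully",
        "record written to database": {"name": "Jane Smith"} in app.database,
    }

print(execute_test(FakeApp(write_succeeds=False)))
```

With the write failing, the happy-path check still reports True while the side-effect check reports False: an expected result that only says "success message appears" would mark this run as Pass and let the defect through.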
In practice, teams that write excellent expected results do a few things consistently. They reference specific text, values, and UI elements rather than general descriptions. They include negative expectations where relevant ("no error messages in the browser console," "no duplicate records created"). They state timing expectations for asynchronous operations. And they review expected results as part of test case peer review, asking "would a new tester know exactly what to look for?" The time invested in writing precise expected results pays off many times over through more consistent, reliable testing.
Examples
1. Expected: User is redirected to the dashboard URL (/dashboard) with a welcome message displaying "Hello, John" and the current date in the user's local timezone
2. Expected: Error message "Invalid email format" displays in red text below the email input field, the field border turns red, and the form is not submitted (no network request fires)
3. Expected: Order confirmation email arrives within 5 minutes containing the correct order number, item list, total amount, and a tracking link that resolves to the carrier's website
4. Expected: After saving the profile, the page refreshes and displays the updated name "Jane Smith" in both the profile header and the navigation bar avatar tooltip, and the database record shows the new name
5. Expected: The pagination control displays "Showing 1-25 of 312 results," the first page is highlighted, and clicking "Next" loads items 26-50 without a full page reload
6. Expected: The 404 error page displays with the correct branding, a "Return to Home" link that works, and a search bar. The HTTP response code is 404 (not 200 with error content).
7. Expected: The CSV export file contains exactly 1,000 rows (matching the filter), with headers matching the column names on screen, and all date values formatted as YYYY-MM-DD
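Expected results this precise translate directly into automated checks. A hedged sketch of the CSV-export example, using an assumed `created_date` column and a two-row sample in place of the real 1,000-row export:

```python
import csv
import io
import re

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # YYYY-MM-DD

def verify_export(csv_text, expected_headers, expected_rows):
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)
    rows = list(reader)
    assert headers == expected_headers, f"headers {headers} != {expected_headers}"
    assert len(rows) == expected_rows, f"got {len(rows)} rows, expected {expected_rows}"
    date_col = headers.index("created_date")
    for row in rows:
        assert DATE_RE.match(row[date_col]), f"bad date format: {row[date_col]!r}"
    return True

sample = "id,name,created_date\n1,Widget,2024-03-01\n2,Gadget,2024-03-02\n"
print(verify_export(sample, ["id", "name", "created_date"], expected_rows=2))  # -> True
```

Because the expected result named exact counts, headers, and a date format, the check is mechanical; a result like "export works" would have left nothing to assert.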
In BesTest
BesTest provides a dedicated expected result field for each test case step, ensuring that every action has clear, verifiable success criteria. During execution, testers compare actual outcomes against these expected results and mark each step as Pass or Fail. The review workflow also validates that expected results are specific enough before a test case is approved for execution.
Related Terms
Test Case
A documented set of conditions and steps used to verify that a software feature works as expected.
Test Execution
The process of running test cases and recording the actual results.
Precondition
The required state or setup that must exist before a test case can be executed.
Defect (Bug)
A flaw in the software that causes it to behave incorrectly or unexpectedly.
Test Coverage
A measure of how much of the software or requirements are tested by test cases.