How to Tell If Your Verification Plan Will Fail (Before Testing Begins)


Verification failures are rarely surprises.

When a test fails late in development, teams often treat it as an execution issue. Attention shifts to the method, the setup, or the result itself. In most cases, the underlying problem was introduced much earlier.

Many teams believe they have a verification plan. They have requirements, a trace matrix, and a list of intended tests. What is often missing is a clearly defined approach for how each requirement will be proven.

That gap is where most verification problems begin.

Most Teams Don’t Actually Have a Verification Plan

A typical development program can produce a complete-looking verification package:

  • Requirements are documented
  • A traceability matrix is in place
  • Tests are identified

This creates the appearance of readiness. The missing piece is definition at the requirement level. For each requirement, there must be a clear, agreed method for demonstrating compliance.

When that level of clarity is absent, the work of defining how to test the product has simply been postponed.

The Defining Failure: Test Strategy Comes Too Late

In many programs, requirements are written first and test methods are developed later. This sequence creates a disconnect. Requirements are drafted without a clear understanding of how they will be evaluated, and test methods are forced to adapt after the fact.

Strong programs close this gap early. For every requirement, the team can answer:

  • How will this be tested?
  • Under what conditions?
  • With what setup or equipment?
  • What defines pass and fail?

If these questions cannot be answered when the requirement is created, the requirement remains incomplete and the verification plan is already compromised.
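One way to close this gap is to treat the four answers as required fields of the requirement itself, so an unanswered question is visible at authoring time rather than at test time. The sketch below is illustrative only: the Requirement class and its field names are assumptions, not a prescribed data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    """Illustrative requirement record; class and field names are assumptions."""
    req_id: str
    statement: str
    test_method: Optional[str] = None         # How will this be tested?
    conditions: Optional[str] = None          # Under what conditions?
    equipment: Optional[str] = None           # With what setup or equipment?
    pass_fail_criteria: Optional[str] = None  # What defines pass and fail?

def missing_answers(req: Requirement) -> list[str]:
    """Return the verification questions still unanswered for this requirement."""
    checks = {
        "test method": req.test_method,
        "test conditions": req.conditions,
        "setup/equipment": req.equipment,
        "pass/fail criteria": req.pass_fail_criteria,
    }
    return [name for name, value in checks.items() if not value]

# A requirement drafted without its test strategy is visibly incomplete.
req = Requirement("REQ-014", "Device shall power on within 2 seconds.")
print(missing_answers(req))
# ['test method', 'test conditions', 'setup/equipment', 'pass/fail criteria']
```

However the record is actually stored, the point is the same: an empty answer is a visible gap in the requirement, not a problem deferred to the test team.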


The Warning Signs You Can See Before Testing Starts

These issues often appear independently, but they share a common origin: delayed test strategy definition.

Requirements Are Not Truly Testable

Requirements include vague or qualitative language. Terms such as “adequate,” “intuitive,” or “robust” appear without measurable definition. Different functions interpret the same requirement in different ways.

This lack of precision reflects the absence of a defined test approach during requirement development.

Test Methods Are Still Evolving Late

Test protocols continue to change as execution approaches. Teams debate how to simulate use conditions or what constitutes a valid setup.

At this stage, the program is still defining how performance will be evaluated instead of preparing to confirm it.

Acceptance Criteria Are Unstable

Pass and fail thresholds are debated after results are generated. Criteria shift to accommodate observed performance.

This introduces subjectivity into a process that depends on consistency and pre-defined expectations.
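One practical safeguard is to bind the threshold to the protocol rather than to the results, so the criterion exists, and is frozen, before any data does. A minimal sketch, with an assumed metric and limit chosen purely for illustration:

```python
# Acceptance criterion recorded at protocol approval, before any data exists.
# The metric name and limit are illustrative assumptions.
CRITERION = {"metric": "activation_force_N", "max": 5.0}

def verdict(measured: float) -> str:
    # The threshold comes from the pre-approved criterion, never from the data.
    return "PASS" if measured <= CRITERION["max"] else "FAIL"

print(verdict(4.2))  # PASS
print(verdict(5.6))  # FAIL, and the limit does not move to accommodate it
```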

Traceability Exists, but Coverage Is Weak

The trace matrix appears complete, but multiple requirements map to the same generic test. Coverage lacks depth, even when documentation appears thorough.

This often reflects traceability developed after the fact rather than built from a defined test strategy.
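This pattern is easy to surface before testing starts: invert the trace matrix and count how many requirements each test is asked to absorb. The sketch below assumes a simple requirement-to-test mapping; the IDs and the threshold of three are illustrative.

```python
from collections import defaultdict

# Illustrative trace matrix: requirement ID -> verifying test ID.
# Real matrices are many-to-many; one-to-one keeps the sketch short.
trace_matrix = {
    "REQ-001": "TEST-GEN-01",
    "REQ-002": "TEST-GEN-01",
    "REQ-003": "TEST-GEN-01",
    "REQ-004": "TEST-ENV-02",
}

def generic_tests(matrix: dict[str, str], threshold: int = 3) -> dict[str, list[str]]:
    """Flag tests that absorb many requirements: a hint that coverage is shallow."""
    by_test: dict[str, list[str]] = defaultdict(list)
    for req_id, test_id in matrix.items():
        by_test[test_id].append(req_id)
    return {t: reqs for t, reqs in by_test.items() if len(reqs) >= threshold}

print(generic_tests(trace_matrix))
# {'TEST-GEN-01': ['REQ-001', 'REQ-002', 'REQ-003']}
```

A test flagged this way is not necessarily wrong, but it deserves scrutiny: either it genuinely exercises every mapped requirement, or the mapping was drawn to fill the matrix.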

Edge Conditions Are Missing or Vague

Nominal performance is defined, but worst-case conditions are unclear or absent. Environmental extremes, user variability, and boundary scenarios are not fully specified.

These gaps persist into execution and surface as late-stage issues.

A Simple Pre-Verification Readiness Check

Before testing begins, a verification plan should withstand a straightforward review:

  • Can every requirement be tested in one clear, objective way?
  • Is the test method defined and agreed for each requirement?
  • Are acceptance criteria fixed before any data is generated?
  • Are worst-case conditions explicitly defined?
  • Would two independent teams design the same test approach?

If the answer to any of these is no, the plan is not ready.
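Four of these five questions can be checked mechanically against the requirement set; only the last, whether two independent teams would converge on the same test design, needs human judgment. A minimal sketch of such a gate, assuming requirements are stored as records with the fields named below:

```python
# Fields every requirement record must answer; names are assumptions.
REQUIRED_FIELDS = (
    "test_method",         # one clear, objective way to test
    "conditions",          # the conditions under which it is tested
    "equipment",           # the setup or equipment used
    "pass_fail_criteria",  # fixed before any data is generated
    "worst_case",          # worst-case conditions explicitly defined
)

def plan_is_ready(requirements: list[dict]) -> bool:
    """Gate the plan: every requirement must answer every question.
    The fifth checklist item (would two independent teams design the
    same test?) is a judgment call and is deliberately not automated."""
    ready = True
    for req in requirements:
        gaps = [f for f in REQUIRED_FIELDS if not req.get(f)]
        if gaps:
            print(f"{req.get('id', '?')}: missing {', '.join(gaps)}")
            ready = False
    return ready

reqs = [{"id": "REQ-014", "test_method": "bench pull test", "conditions": "nominal"}]
print(plan_is_ready(reqs))
# REQ-014: missing equipment, pass_fail_criteria, worst_case
# False
```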


What Strong Verification Planning Looks Like

Strong verification planning is deliberate and front-loaded.

Test strategy is defined alongside requirements. Requirements are written with a clear understanding of how they will be evaluated. Acceptance criteria are established in advance. Test conditions, including edge cases, are explicitly defined.

When this foundation is in place, execution proceeds with fewer surprises and clearer outcomes.

The Real Cost of Getting This Wrong

When test strategy is defined late, the impact extends beyond the lab.

Teams revisit requirements, revise test methods, and repeat testing cycles. Schedules slip and costs increase. Confidence in the program begins to erode.

Under pressure, teams may begin to explain results rather than resolve them. When failures are interpreted or justified after the fact instead of addressed through design changes or requirement clarification, the verification record becomes harder to defend and the program takes on unnecessary regulatory risk.

These outcomes are driven by decisions made well before testing begins.

Conclusion

Verification depends on decisions made early in development.

When test methods are still being defined, acceptance criteria are shifting, or results require interpretation during execution, the plan was not fully established at the outset.

Clear, early definition of how each requirement will be evaluated creates a more stable and predictable verification process.

Free Verification Planning Assessment

If your verification phase feels unpredictable, the issue may not be testing capability. It may be how the work was defined.

A65 works with development teams to evaluate requirement quality, test strategy definition, and verification readiness before execution begins. A focused assessment often reveals where test methods are being deferred, where acceptance criteria are not fully established, and where gaps will surface late in development.

These issues are rarely obvious during planning, but they consistently drive delay, rework, and risk during verification.

If you would like an objective review of your current verification approach, we welcome the conversation.

Email: sdonnigan@a65consulting.com
Or schedule your review online