Predicate Device
This topic is part of the SG Systems Global medical device lifecycle, vigilance & regulatory compliance glossary.
Updated December 2025 • FDA 510(k) Clearance, 510(k) Submission, 510(k) vs PMA, FDA 510(k) Database, Predicate Rule, Design History File (DHF), Device Master Record (DMR), Verification & Validation (V&V), QMSR, Postmarket Surveillance, Data Integrity, CAPA, V5 QMS
A Predicate Device is the legally marketed device you cite in a US 510(k) submission to support a claim that your new device is substantially equivalent (SE) and therefore eligible for FDA 510(k) clearance.
In plain terms: the predicate is your “comparison anchor.” It defines what you are trying to be equivalent to in intended use and technological characteristics, and it shapes the test program you need to prove you didn’t introduce new questions of safety or effectiveness.
This is where many teams get the system backwards. A predicate device is not a marketing competitor you like, or a “close enough” product you found on a website.
It is a regulatory strategy decision with real consequences. Choose a strong predicate and your pathway becomes clearer: you can align indications, map technological characteristics, justify differences, and build a focused V&V plan.
Choose a weak predicate and your 510(k) becomes an argument you can’t win—because you can’t prove equivalence without inventing evidence after the fact.
“A predicate is not a shortcut. It’s a commitment: you are telling FDA exactly what you are comparable to, and then proving it with evidence.”
You typically need to show:
- Same intended use (or a safely justified, tightly scoped alignment),
- Same technological characteristics or different technological characteristics that do not raise new questions of safety/effectiveness, and
- Performance data (bench, software, biocompatibility, electrical safety, sterilization, usability, clinical where needed) that makes the equivalence defensible.
Your predicate choice drives your test plan, your labeling boundaries, your risk file, and ultimately whether your 510(k) review is “straightforward” or a months‑long cycle of requests, rework, and narrowing claims.
1) What a Predicate Device Actually Is
In the 510(k) framework, the predicate is the device you use to build the core regulatory claim: “My device is substantially equivalent to this legally marketed device.”
“Legally marketed” generally means the predicate is on the US market through an acceptable prior pathway (e.g., a cleared 510(k), certain preamendment devices, or other legally marketed status that FDA recognizes for predicate purposes).
The predicate is used to compare:
- Intended use: What the device is for, the clinical purpose, and the target users/patients/environment.
- Technological characteristics: Materials, design, energy source, principles of operation, software functions, key performance characteristics.
- Performance evidence: Whether testing demonstrates that differences do not create new safety/effectiveness questions.
If the predicate is the anchor, your submission is the chain: you connect your design inputs to predicate characteristics, then to verification/validation evidence, then to labeling claims—while maintaining traceability in your DHF.
2) Why Predicate Selection Is a Big Deal
Predicate selection is one of the few early decisions that can either simplify or destabilize your entire development program.
It impacts:
- Labeling and claims boundaries: If your intended use drifts beyond the predicate, you may have to narrow claims late or add costly evidence.
- Test scope and depth: A predicate with well-known characteristics can allow focused performance testing; a predicate mismatch can trigger broad “prove it” expectations.
- Risk management logic: Differences from the predicate must be evaluated in your risk file and justified with controls and verification (see V&V).
- Review friction: Weak predicates trigger questions, requests for additional information, and sometimes a strategic reset.
The hard truth: if your team can’t explain why your predicate is appropriate in one minute, you probably don’t have a predicate strategy—you have a guess.
3) “Substantial Equivalence” in Practice
The 510(k) is fundamentally a comparison exercise. FDA’s substantial equivalence logic is usually framed around two major questions:
- Does the device have the same intended use as the predicate?
- Does the device have the same technological characteristics? If not, do the differences raise new questions of safety or effectiveness?
“New questions” is where teams get trapped. It doesn’t necessarily mean the device is unsafe. It means your differences create unknowns that your evidence package doesn’t close.
The classic result is a request for more data—or a forced narrowing of claims until the unknowns disappear.
A high-quality predicate choice reduces “new questions” by aligning:
- patient population and clinical environment,
- core operating principle (mechanical, electrical, software logic),
- materials and patient-contact profile,
- energy source and hazards (heat, radiation, pressure, algorithmic misclassification),
- critical performance characteristics and failure modes.
4) How to Find Candidate Predicates (Without Fooling Yourself)
Most teams start with the FDA 510(k) Database to identify devices in the same product code and regulation number, then refine by intended use and technology.
The “fast but wrong” approach is picking the first device that looks similar and writing a comparison table.
The “slow but right” approach is building a predicate shortlist and stress-testing each candidate (a simple scoring sketch follows at the end of this section):
- Labeling match: Do the indications for use and limitations align with your intended use language?
- Technology match: Do key design features, materials, and operating principles align?
- Risk match: Are the major hazards comparable (and controllable) within the same test logic?
- Evidence availability: Can you reasonably obtain sufficient public information to support the comparison, or will you be forced to over‑test due to uncertainty?
You’re not trying to find the “closest marketing competitor.” You’re trying to find the predicate that makes a defensible regulatory argument with minimal ambiguity.
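To show what stress-testing a shortlist can look like in practice, here is a minimal, hypothetical sketch in Python. The device names, scores, and criterion weights are all invented for illustration (they are not an FDA scale); the point is that every candidate gets the same documented questions.

```python
from dataclasses import dataclass

# Hypothetical screening criteria drawn from the stress-test questions above.
# Weights and scores are illustrative team judgments, not a regulatory scale.
CRITERIA = {
    "labeling_match": 3,
    "technology_match": 3,
    "risk_match": 2,
    "evidence_availability": 1,
}

@dataclass
class PredicateCandidate:
    name: str        # e.g. a 510(k) number or trade name (fictional here)
    scores: dict     # criterion -> 0 (poor fit) to 2 (strong fit)
    notes: str = ""  # why the team scored it this way

    def weighted_score(self) -> int:
        return sum(weight * self.scores.get(criterion, 0)
                   for criterion, weight in CRITERIA.items())

# Entirely fictional candidates, for illustration only.
shortlist = [
    PredicateCandidate("K000001 (Device A)",
                       {"labeling_match": 2, "technology_match": 1,
                        "risk_match": 2, "evidence_availability": 1},
                       "Indications align; different energy source needs justification."),
    PredicateCandidate("K000002 (Device B)",
                       {"labeling_match": 1, "technology_match": 2,
                        "risk_match": 1, "evidence_availability": 2},
                       "Technology matches; labeled intended use is narrower than ours."),
]

for cand in sorted(shortlist, key=lambda c: c.weighted_score(), reverse=True):
    print(f"{cand.name}: weighted score {cand.weighted_score()} - {cand.notes}")
```

The arithmetic is not the point. The point is that each candidate is judged against the same questions, and the rationale lands in the DHF instead of in someone’s memory.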
5) Primary Predicate vs Secondary Predicate: How Comparisons Actually Work
In many 510(k)s, manufacturers cite one primary predicate device and may cite additional devices (sometimes described as secondary predicates) to support specific feature comparisons.
The principle is simple: your substantial equivalence argument must still be coherent and not “assembled” from unrelated devices.
The pragmatic approach:
- Primary predicate: anchors intended use and the core technological characteristics.
- Additional comparisons: support specific components or features (materials, accessories, software modules) when appropriate and consistent.
If your comparison looks like you’re cherry-picking the best parts of multiple devices to justify your own, you’re likely to trigger questions.
The goal is not to “win” a debate; it’s to demonstrate that differences do not create new safety/effectiveness questions—and that your evidence closes the loop.
6) Multiple Predicates and the “Split Predicate” Trap
Teams often discover a harsh reality: no single predicate matches their intended use and their technology.
That’s where some submissions drift toward a “split predicate” pattern: one device matches intended use, another matches technology.
This is a trap because it can weaken the substantial equivalence logic:
- FDA needs one coherent equivalence story, not two partial stories.
- Splitting can be interpreted as evidence that your device is meaningfully novel in ways that create new questions.
- When you split, you often end up generating more data anyway—because the combined comparison has gaps.
If you are forced into this situation, the strategic response is usually one of these:
- Narrow your intended use until a single predicate fits,
- Adjust design choices to reduce technological differences, or
- Plan for a different pathway (e.g., a De Novo strategy) if you truly are novel and can’t close the gap credibly.
7) Predicate vs “Reference Device”: Don’t Mix the Concepts
A predicate device is a regulatory comparison anchor for 510(k) substantial equivalence.
A “reference device” can mean two things. In FDA’s 510(k) guidance it refers to a legally marketed device cited only to support scientific methodology or performance criteria; it cannot establish intended use or substantial equivalence on its own. Teams also use the term informally for a device they benchmark against, a standards reference, a test method reference, or a competitor they track.
In well-run development, you may have:
- Predicate device(s): for regulatory equivalence,
- Benchmark/reference devices: for competitive performance targets,
- Standards references: for test method selection and acceptance criteria rationales.
Keep these lines clean. If your engineering benchmarks drive design decisions that drift away from your predicate anchor, you can end up with a device that is “great” competitively but costly to clear.
8) What Makes a Predicate Defensible
A defensible predicate is not just “the right product code.” It has three qualities:
- Clear intended use alignment: your indications and user environment can be mapped without gymnastics.
- Comparable core technology: the primary mechanism and hazard profile are similar enough that the same types of testing answer the same questions.
- Explainable differences: any differences are either minor or supported by performance evidence that clearly closes risk questions.
A quick reality check: if your comparison table relies heavily on “N/A” or vague statements like “similar,” you’re not ready. You need measurable characteristics and testable claims.
Predicate defensibility also depends on good internal discipline:
- controlled intended use language,
- risk-based V&V planning,
- traceable requirements and test results,
- documented rationale captured in the DHF.
9) Data Expectations: What You Usually Need to Prove Equivalence
Predicate selection doesn’t remove the need for evidence—it focuses it. Typical data categories include:
- Bench performance: accuracy, strength, fatigue, flow, sealing, delivery performance, alarms, measurement performance—device-specific.
- Software evidence: requirements, hazard analysis, verification, validation, cybersecurity controls, release traceability.
- Biocompatibility: for patient-contacting materials (and any change from the predicate’s materials matters).
- Electrical safety / EMC: for powered devices, especially where energy sources or user environments differ from the predicate.
- Sterilization / packaging: if sterile claims exist; changes here can raise “new questions” quickly.
- Usability/HFE: if user interface differences could drive use error risk.
- Clinical data: sometimes required when bench and analytical evidence cannot adequately address safety/effectiveness for the intended use.
The predicate influences how hard FDA pushes for clinical data. If your device is within well-understood technology boundaries and your differences are modest, bench evidence may be enough.
If your predicate is weak, or your technology introduces a new hazard profile, you may end up needing clinical evidence—or forced claim narrowing.
10) Predicate Choice Should Drive Design Controls (Not the Other Way Around)
In a mature development program, predicate strategy and design controls are linked from day one:
- Design inputs include intended use language aligned to the predicate.
- Design outputs capture the characteristics you must compare (materials, dimensions, energy limits, software functions).
- Verification plans are built to show equivalence and control differences.
- Validation confirms the device meets user needs within the claimed use environment.
This is why predicate strategy belongs inside the DHF decision trail.
If predicate selection is handled as an “RA task at the end,” the team usually discovers late that core requirements don’t align with the regulatory comparison story—and then the business pays for redesign or retesting.
11) Manufacturing and QMS Implications: “Equivalent Device” Still Needs Control
A 510(k) clearance is not a license to operate without discipline. FDA’s expectations for quality systems (moving toward the QMSR) and lifecycle controls remain.
Predicate strategy intersects manufacturing and quality in practical ways:
- Material changes: if you change materials vs predicate assumptions, you need biocompatibility and performance rationales.
- Supplier shifts: can create performance drift that undermines “equivalence” in the field.
- Process capability: if your process can’t reliably hit key characteristics, your cleared design becomes hard to reproduce.
- Traceability: field issues require fast lot/device genealogy and documented manufacturing history.
This is where data integrity becomes operational survival. If your batch/device history can’t prove what was built, tested, and shipped, your ability to manage complaints and field actions collapses.
12) Software and Cybersecurity: Predicates Don’t “Cover” Your Code
For software-driven devices, predicate selection is necessary but not sufficient. FDA typically expects software documentation and testing proportional to risk. A predicate can help establish intended use and baseline functionality, but it does not eliminate the need to prove:
- software requirements and traceability to tests (see the sketch at the end of this section),
- hazard analysis and risk controls implemented in code,
- verification/validation evidence for algorithms and workflows,
- cybersecurity controls and secure update processes,
- change control discipline for postmarket releases.
Many “new questions” in modern 510(k)s come from software differences—especially connectivity, cloud interfaces, AI/ML logic, or integrations into clinical IT environments.
If those differences exist, predicate selection must be paired with a strong V&V strategy, or the submission becomes a prolonged debate.
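To make “software requirements and traceability to tests” concrete, here is a minimal sketch in Python, assuming an invented set of requirement and test IDs, of the kind of gap check a team might run over its trace data before drafting the submission narrative:

```python
# Hypothetical trace records: each software requirement should link to at least
# one passing verification test before the equivalence argument relies on it.
# All IDs are invented for illustration.
requirements = ["SRS-001", "SRS-002", "SRS-003"]
test_results = {
    "SRS-001": [("TC-010", "pass")],
    "SRS-002": [("TC-020", "fail"), ("TC-021", "pass")],
    # "SRS-003" deliberately has no linked tests in this example.
}

def trace_gaps(reqs, results):
    """Return requirements with no linked test or no passing result."""
    gaps = []
    for req in reqs:
        linked = results.get(req, [])
        if not linked:
            gaps.append((req, "no linked verification test"))
        elif not any(status == "pass" for _, status in linked):
            gaps.append((req, "no passing result"))
    return gaps

for req, reason in trace_gaps(requirements, test_results):
    print(f"TRACE GAP: {req} - {reason}")
```

Whatever system holds the trace data, the question is the same: which comparison claims would rest on requirements that have no passing verification evidence behind them?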
13) Predicate Devices vs EU Equivalence: Similar Words, Different Game
Teams doing global product strategy often assume the EU has an equivalent “predicate device” concept. It doesn’t—at least not in the same way.
Under EU MDR, “equivalence” is typically discussed in the context of clinical evaluation, and it is heavily constrained.
EU market access is built around conformity to MDR requirements, technical documentation, clinical evaluation, and notified body assessment for many devices—not a 510(k)-style “substantial equivalence to a predicate” pathway.
Practical takeaway: don’t let US predicate logic drive your EU strategy blindly. You can align evidence collection for efficiency, but you must meet each region’s framework on its own terms.
14) Common Predicate Mistakes (And Their Costs)
These mistakes show up repeatedly in failed or delayed 510(k)s:
- Picking a predicate by appearance: “looks similar” is not a technological characteristics argument.
- Over-claiming intended use: broad claims that exceed predicate labeling trigger new questions and often force late claim narrowing.
- Ignoring hazard profile differences: changes in energy, materials, software logic, or use environment introduce risks that require data.
- Relying on thin public info: if you can’t describe predicate characteristics clearly, your comparison becomes vague and test scope expands.
- Letting design drift: engineering changes that make the product “better” but move it away from the predicate without updating the regulatory strategy.
- Weak traceability: inability to show how tests and results tie to the comparison claims and to risk controls.
The costs are predictable: additional testing, additional review cycles, delayed launch, and sometimes the need to restart with a different predicate and a narrowed claim set.
15) Implementation Roadmap: Building Predicate Strategy into the Program
A pragmatic, defensible approach looks like this:
- 1) Lock intended use language early. Draft “Indications for Use” text that is precise and enforceable.
- 2) Build a predicate shortlist. Use the FDA 510(k) Database to identify candidates in the correct product code/regulation, then narrow by labeling and technology.
- 3) Create a structured comparison matrix. Intended use, key specs, materials, energy, software functions, accessories, sterilization/packaging, performance characteristics.
- 4) Do a “new questions” risk workshop. For each difference, identify hazards, controls, and the exact data needed to close the gap.
- 5) Convert the matrix into a V&V plan. Don’t just describe differences—test them (a minimal sketch follows at the end of this roadmap).
- 6) Maintain predicate alignment under change control. If design changes accumulate, reassess the predicate fit before you drift into a different device.
- 7) Prepare the submission narrative early. Your comparison logic should exist months before submission, not weeks.
The goal is simple: the predicate should be a stable reference point from concept through submission, not a late-stage scramble.
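As a sketch of steps 3 through 5, assuming a deliberately simple in-house data model rather than any particular tool, a structured comparison matrix can be as plain as a set of rows where every difference from the predicate must carry either a rationale or a planned piece of evidence:

```python
# Illustrative comparison rows: one characteristic per row, subject device vs predicate.
# Any difference without planned evidence is flagged for the "new questions" workshop.
comparison_matrix = [
    {"characteristic": "Intended use",
     "subject": "Same wording as predicate", "predicate": "Same wording as predicate",
     "evidence": None},
    {"characteristic": "Patient-contact material",
     "subject": "Polymer A", "predicate": "Polymer B",
     "evidence": "Biocompatibility testing per planned protocol"},
    {"characteristic": "Measurement range",
     "subject": "0-200 units", "predicate": "0-150 units",
     "evidence": None},
]

def open_differences(matrix):
    """Differences from the predicate that have no planned evidence yet."""
    return [row for row in matrix
            if row["subject"] != row["predicate"] and not row["evidence"]]

for row in open_differences(comparison_matrix):
    print(f"UNRESOLVED DIFFERENCE: {row['characteristic']} "
          f"({row['subject']} vs {row['predicate']}) needs a V&V line item")
```

Tools and formats differ; the discipline does not: no difference leaves the matrix without a test or a documented rationale.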
16) What Predicate Device Means for V5
Predicate strategy succeeds or fails based on one thing: traceable evidence.
Not just test reports, but the ability to show—cleanly—how intended use, requirements, risks, verification/validation, labeling, and postmarket learnings connect.
On the V5 platform, the predicate-driven development and compliance story becomes easier to run and easier to defend:
- V5 QMS: ties document control, design controls, risk records, CAPA, complaints, audits, and change control into a single, auditable system with controlled approvals and audit trails.
- Traceability that auditors can follow: from predicate comparison claims to V&V evidence to released labeling—without spreadsheet archaeology.
- Change control discipline: ensures design drift doesn’t quietly break predicate alignment or invalidate the comparison story.
- Postmarket feedback loops: links complaint trends and PMS signals back into risk management and design updates so your “equivalent” device remains controlled over time.
Net effect: predicate device strategy stops being a fragile narrative and becomes a controlled lifecycle system—exactly what regulators expect when they ask, “Show me how you know your device is what you say it is.”
FAQ
Q1. Is a predicate device just a competitor product?
No. A predicate device is a legally marketed device used in a 510(k) substantial equivalence argument. Market competitors can be benchmarks, but predicate selection is a regulatory decision based on intended use and technological characteristics—not branding or sales position.
Q2. Can we use multiple predicates in one 510(k)?
Sometimes. You can cite more than one device to support comparisons, but your equivalence argument must remain coherent. If you “split” intended use and technology across different predicates, you risk weakening the submission and creating new questions that require more data or a different pathway.
Q3. What makes a predicate “bad”?
A bad predicate is one you can’t defend: intended use doesn’t align, technology differs in ways that introduce new hazards, or public information is too thin to support a clear comparison. The result is usually more testing, more questions, and often claim narrowing.
Q4. Does a predicate remove the need for clinical data?
Not automatically. A strong predicate can reduce the likelihood of clinical studies if bench and analytical evidence can close the safety/effectiveness questions. But if differences introduce new questions that can’t be answered with non-clinical data, clinical evidence may still be needed.
Q5. What’s the fastest practical way to improve predicate strategy?
Build a structured predicate comparison matrix early and convert every meaningful difference into a risk-based V&V plan. Then keep predicate alignment under change control so your device doesn’t drift away from the comparison you’re trying to prove.
Related Reading
• 510(k) Pathway: FDA 510(k) Clearance | 510(k) Submission | 510(k) vs PMA | FDA 510(k) Database
• Design & Evidence: Design History File (DHF) | Device Master Record (DMR) | Verification & Validation (V&V)
• Quality & Lifecycle: QMSR | Postmarket Surveillance | CAPA | Data Integrity
• V5 Platform: V5 QMS | V5 Solution Overview | V5 Connect API