
Design History File (DHF)

This topic is part of the SG Systems Global regulatory & operations guide library.

Design History File (DHF): build inspection-ready design evidence—requirements, risk, V&V, transfer, and changes.

Updated Jan 2026 • design history file dhf, design controls, traceability matrix, risk management, verification & validation, design transfer • Medical devices

A Design History File (DHF) is the controlled collection of records that proves a medical device was designed under disciplined design controls—requirements defined, risks assessed, design decisions reviewed, and verification/validation completed—before the design was released. It is not a “folder of PDFs.” It is the evidence chain that lets a regulator, auditor, or internal reviewer understand what you intended to build, what you actually built, how you proved it, and how you controlled changes.

A DHF sits at the center of design controls under 21 CFR Part 820 and design and development expectations under ISO 13485. In the EU, the same intent is expressed through technical documentation under EU MDR 2017/745, often reviewed by a notified body as part of CE marking. Different filing structure, same question: can you prove controlled requirements, risk controls, and objective evidence for the released design?

Most DHFs fail for a simple reason: teams treat “design evidence” as a byproduct rather than a deliverable. Design outputs get generated, test reports accumulate, and decisions happen in meetings—but the connective tissue (traceability, review records, rationale, and change linkage) is missing. That’s when a DHF becomes a scramble: assembling artifacts after the fact, trying to recreate why decisions were made and whether tests actually covered the design intent.

“If you can’t trace it, you can’t defend it.”

TL;DR: Design History File (DHF) is the evidence backbone for medical device design controls. A defensible DHF is governed by document control and change control, maintains bidirectional traceability from user needs and design inputs to design outputs, ISO 14971 risk controls, and Verification & Validation (V&V) evidence. It should show design reviews and action closure, and prove design transfer into the DMR, with record expectations flowing into the DHR or eDHR. If you can’t quickly demonstrate “what requirement was tested, what risk control was verified, and what version was released,” your DHF is not inspection-ready.

1) What a DHF actually means

A DHF is the end-to-end proof that the design was controlled. That proof typically includes:

  • Intent: who the user is, what problem you solve, and what “success” looks like (often captured as URS or equivalent user needs).
  • Inputs: design requirements, standards, performance specs, and constraints.
  • Outputs: drawings, specifications, software builds, labeling, and instructions.
  • Reviews: documented design reviews and action closure.
  • Risk controls: risk analysis and control measures tied to the design (ISO 14971 alignment).
  • V&V evidence: protocols and results proving requirements were met (V&V).
  • Transfer evidence: proof the released design was handed off into controlled manufacturing records (DMR).

A DHF is not “one document.” It is a controlled system of records. The practical goal is simple: a competent reviewer can follow the design story without guessing, without gaps, and without relying on individual memory.

2) Why DHF discipline is non-negotiable

There are four forcing functions that make DHF quality real:

Regulatory resilience
Inspections expect clear evidence of design controls and risk management.
Release resilience
QA needs traceable evidence to release design changes without guesswork.
Investigation resilience
Complaints and CAPA require traceability from field issues back to design intent.
Knowledge resilience
Teams turn over; the DHF preserves rationale and evidence across years.

The hard truth: a weak DHF does not only risk an audit finding. It creates operational drag. Engineers re-run tests because old evidence is unusable. QA blocks releases because the risk file is stale. Regulatory can’t answer questions from a notified body. Manufacturing builds “what worked last time,” and the as-built device drifts away from the validated design. Small documentation weaknesses become regulatory delays, long investigations, and avoidable product risk.

DHF discipline also protects you from “quiet failure.” A device can appear stable in production while slowly accumulating undocumented changes: supplier swaps, software patches, labeling updates, test method drift, and informal spec tweaks. Over time, the product you ship is not the product you proved. A strong DHF, tied tightly to change control, is how you prevent that drift.

3) DHF vs DMR vs DHR

Teams mix these up. Confusion creates missing evidence and wrong retention practices.

Artifact     | Primary purpose                      | What it proves
DHF          | Design control evidence              | The design was developed, reviewed, risk-managed, and verified/validated
DMR          | Manufacturing instructions and specs | How to build the device consistently
DHR / eDHR   | Production history per unit or lot   | What was actually built and released

Think of the chain reviewers walk: DHF proves the design; DMR describes the approved build; DHR proves the build happened as approved. If you can’t keep those concepts separate, your evidence will be scattered and fragile.

One practical self-check: pick a single high-risk requirement and run the trace chain end-to-end. You should be able to pull the approved design input, the linked risk control, the design output that implements it, the verification protocol, the raw results, the deviation (if any), and the final acceptance decision. If any link lives only in someone’s memory, it is not controlled evidence during an audit.
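The self-check above can be sketched as a small script. This is a hedged illustration, not a real system: the record shapes, field names, and IDs (`DI-014`, `RC-03`, and so on) are all hypothetical, standing in for whatever your document control or requirements tool actually stores.

```python
# Hypothetical trace-chain self-check: walk one requirement's evidence links
# and report the first broken ones. All IDs and field names are illustrative.

def walk_trace(records, req_id):
    """Return (evidence chain, list of missing links) for one design input."""
    chain, missing = [], []
    req = records["inputs"].get(req_id)
    if not req:
        return [], [f"design input {req_id} not found"]
    chain.append(("input", req_id))
    for link in ("risk_control", "design_output", "verification", "acceptance"):
        ref = req.get(link)
        if ref and ref in records.get(link + "s", {}):
            chain.append((link, ref))
        else:
            missing.append(f"{req_id}: no controlled {link.replace('_', ' ')}")
    return chain, missing

records = {
    "inputs": {"DI-014": {"risk_control": "RC-03", "design_output": "DO-21",
                          "verification": "VER-77", "acceptance": None}},
    "risk_controls": {"RC-03": {}},
    "design_outputs": {"DO-21": {}},
    "verifications": {"VER-77": {}},
    "acceptances": {},
}
chain, gaps = walk_trace(records, "DI-014")
print(gaps)  # the missing acceptance decision surfaces as an explicit gap
```

The point of the sketch is the failure mode it exposes: any link that lives only in someone's memory shows up as a gap the moment you try to walk the chain mechanically.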

In practice, the most defensible approach is to treat DHF as a controlled “story” and treat DMR/DHR as controlled “execution.” If your DHF is mostly manufacturing travelers and inspection records, you are proving you built something—not that you designed the right thing under control.

4) Define scope: what belongs in the DHF

Scope is where DHFs either become strong or become chaotic. The file needs to include enough evidence to prove control, but not so much unrelated operational noise that review becomes impossible.

A practical DHF scope map typically includes:

  • User needs / intended use: what the device is for, who uses it, and the use environment (often connected to IFU content).
  • Design inputs: requirements, standards, performance needs, safety constraints, and labeling requirements.
  • Design outputs: drawings, BOMs, specifications, software requirements and builds, labeling/artwork, and acceptance criteria.
  • Design reviews: review agendas, attendees, issues, decisions, and action closure (kept under document control).
  • Risk management: hazard analysis, risk estimates, risk controls, and residual risk acceptability aligned to ISO 14971.
  • Verification evidence: tests, inspections, analyses, and results proving design inputs were met.
  • Validation evidence: evidence the device meets user needs and intended use (often includes clinical, usability, and simulated-use where applicable).
  • Design transfer evidence: proof the released design was transferred into the DMR.
  • Design changes: change requests, rationales, impact assessments, and updated V&V results tied to change control.

Tell-it-like-it-is: If your DHF is a collection of files with no clear traceability spine, it is not a DHF. It’s storage.

5) Traceability: the DHF’s “spine”

Traceability is how a DHF stops being a pile of artifacts and becomes defensible evidence. At minimum, you need bidirectional traceability that supports these questions:

  • Forward trace: for each user need and design input, what design outputs implement it, and what verification proves it?
  • Backward trace: for each design output and test, what requirement or risk control does it support?
  • Risk trace: for each identified hazard, what risk control was implemented, and what evidence proves it works?

Traceability is often operationalized through a traceability matrix. That matrix is only as good as its governance: versioned, reviewed, and updated with every meaningful change. Treat the matrix as a controlled artifact under revision control, not as an engineer’s private spreadsheet.

Traceability is also the fastest way to expose gaps. If you can’t map each requirement to objective evidence, either the requirement is undefined, the test plan is weak, or the evidence is missing. All three are problems—and the DHF is where those problems must be visible and resolvable.
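A minimal way to see what "bidirectional" means in practice: keep the matrix as forward rows and derive the backward map, rather than maintaining two copies that drift apart. The table layout and IDs below are assumptions for illustration only.

```python
# Minimal bidirectional traceability matrix sketch (illustrative IDs throughout).
# Each row links a design input to an implementing output and its verification
# evidence; the reverse map is derived from the same rows, never duplicated.

from collections import defaultdict

matrix = [
    # (design input, design output, verification record)
    ("DI-001", "DO-10", "VER-55"),
    ("DI-001", "DO-11", "VER-56"),
    ("DI-002", "DO-12", None),     # gap: output exists, but no verification yet
]

forward = defaultdict(list)   # input  -> [(output, evidence)]
backward = defaultdict(list)  # output -> [inputs it supports]
for di, do, ver in matrix:
    forward[di].append((do, ver))
    backward[do].append(di)

# Forward trace: what implements and proves DI-001?
print(forward["DI-001"])

# Gap report: inputs with any unverified output
gaps = sorted({di for di, do, ver in matrix if ver is None})
print(gaps)
```

Deriving the backward map from the forward rows is the design choice worth copying: one controlled, versioned source of truth, with every other view computed from it.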

6) Governance: document control, reviews, and evidence

A DHF is a quality record. That means it must be governed like one. You don’t need bureaucracy for its own sake, but you do need discipline that is auditable and repeatable.

Core governance anchors:

  • Document control: controlled templates, approvals, effective dates, and current versions.
  • Revision control: versioning for requirements, specs, drawings, software baselines, and traceability matrices.
  • Change control: clear impact assessment and evidence updates when design changes occur.
  • Design review discipline: planned reviews with objective inputs and documented outcomes; action closure is part of the record, not a side conversation.
  • Auditability: evidence must be retrievable and interpretable years later, including raw data where needed for credibility.
  • Quality system alignment: DHF governance must align with your QMS and, where applicable, QMSR.

For electronic records, governance must align with 21 CFR Part 11 and Annex 11 expectations: identity, access control, audit trails, record retention, and retrieval. The simplest operating rule: if a record can influence safety, performance, or compliance, it must be protected like a controlled quality record, because that is what it is.

7) Risk management integration (ISO 14971)

A DHF that ignores risk management is incomplete. In modern device reviews, risk is not a separate binder. It is part of the design logic.

At minimum, DHF content should show:

  • the hazard analysis and risk evaluation approach (aligned to ISO 14971 Medical Device Risk Management)
  • risk controls selected, implemented, and verified
  • traceability from hazard → control measure → design output → verification evidence
  • residual risk evaluation and acceptability rationale

Most risk integration failures are traceability failures. Teams write risk controls that are not mapped to a concrete design output, or they run tests that don’t explicitly claim which risk control they are verifying. If the record cannot show that a control exists and works, reviewers will assume it doesn’t.

Practical rule

If a risk control is “procedural” (training, warnings, labeling), your DHF must show it was designed, reviewed, and validated as part of intended use—not treated as an afterthought.

8) Verification vs validation: what inspectors look for

Verification & Validation is where DHFs either become easy to defend or become painful. Reviewers look for clear separation and clear linkage:

Concept      | Question answered             | Typical evidence
Verification | Did we build it right?        | Bench testing, inspections, analyses, software unit/integration tests
Validation   | Did we build the right thing? | Simulated use, usability/HFE, clinical/field evidence, system-level validation

A common inspection problem is “test evidence without requirement linkage.” Reports exist, but they are not explicitly tied to design inputs or risk controls. Another problem is “requirements without objective acceptance criteria.” Requirements that read like marketing claims cannot be verified. This is why requirements quality is a DHF issue, not just an engineering issue.

If your device involves usability or user interface risk, link DHF validation evidence to Human Factors Engineering (HFE) work. If you ignore usability risk in the DHF, you create a blind spot that shows up later as field errors and complaint trends.

9) Design transfer and the handoff to manufacturing

Design transfer is where DHF evidence must connect to controlled production reality. “We finished design” is meaningless if manufacturing cannot build the design as verified and validated.

A defensible design transfer story typically includes:

  • final released design outputs and specifications
  • manufacturing and inspection instructions placed under the DMR
  • process validation decisions and rationales (where applicable)
  • supplier requirements and controls aligned to supplier quality management
  • linkage to production evidence expectations in the DHR or eDHR

Transfer failures tend to show up as nonconformances and complaints: wrong acceptance tests, ambiguous work instructions, missing inspection steps, and “tribal knowledge” builds. Those are not manufacturing-only problems. They are DHF problems because the DHF owns the evidence that the released design is buildable as intended.

10) Change control and design evolution

Devices evolve. Your DHF must evolve with them. The core rule is simple: if a change impacts requirements, risk, labeling, performance, or safety, it must be controlled and traceable.

Minimum change control expectations in a DHF context:

  • Impact assessment: what requirements, risk controls, or validation claims might change?
  • Evidence update: what verification or validation must be re-run?
  • Trace update: update traceability matrices and risk linkages.
  • Approval record: documented approvals under change control.
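The impact-assessment step above can be sketched as a query over the same trace rows used for forward/backward tracing: given a changed design output, list every chain that passes through it. The row shape and IDs are illustrative assumptions.

```python
# Hypothetical impact-assessment helper: given a changed design output, list
# the trace links that must be reassessed. All IDs are illustrative.

trace = [
    # (design input, design output, risk control, verification record)
    ("DI-001", "DO-10", "RC-03", "VER-55"),
    ("DI-002", "DO-10", "RC-04", "VER-56"),
    ("DI-003", "DO-12", "RC-05", "VER-57"),
]

def impacted_by(changed_output):
    """Rows whose evidence chain passes through the changed output."""
    return [row for row in trace if row[1] == changed_output]

for di, do, rc, ver in impacted_by("DO-10"):
    print(f"reassess {di}: revisit risk control {rc}, re-run or justify {ver}")
```

The output of a check like this is the starting worklist for the change record: each printed line is a requirement whose risk linkage and evidence must be revisited before approval.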

Change control is where DHFs often go stale. The team updates the design output and releases it, but the trace matrix is not updated, the risk file isn’t revisited, and the test plan stays on last year’s assumptions. That creates a DHF that “looks complete” but is not true for the current design.

11) Software and SaMD realities

For software-heavy devices and IEC 62304 governed software development, DHF expectations do not disappear—they intensify. Reviewers want to see controlled baselines, traceability from software requirements to tests, and disciplined configuration management.

A practical DHF approach for software-driven products includes:

  • software requirements and architecture under revision control
  • traceability from requirements to test cases and results
  • cybersecurity-related requirements and verification evidence where applicable
  • release notes tied to design changes and risk impacts
  • evidence that the released software baseline matches what was verified and validated

If your software release process allows “quick patches” without impact assessment and trace updates, your DHF will become untrustworthy fast. For software, the DHF must make the release story explicit: what changed, why, what risks were considered, and what evidence supports release.
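One concrete way to make "the released baseline matches what was verified" checkable is to hash the baseline contents and compare. This is a minimal sketch under assumed inputs: the file names and manifest shape are hypothetical, and real configuration management tools do this with far more rigor.

```python
# Hedged sketch: show that a released software baseline matches the verified
# one by comparing content hashes. File names and contents are illustrative.

import hashlib

def baseline_hash(files):
    """Order-independent SHA-256 over (name, content) pairs."""
    h = hashlib.sha256()
    for name, content in sorted(files.items()):
        h.update(name.encode())
        h.update(content)
    return h.hexdigest()

verified = {"app.bin": b"\x01\x02", "config.json": b'{"mode":"prod"}'}
released = {"app.bin": b"\x01\x02", "config.json": b'{"mode":"prod"}'}

print(baseline_hash(verified) == baseline_hash(released))
```

A "quick patch" that changes even one byte changes the hash, which is exactly the property you want: the release record can state, verifiably, that the shipped baseline is the verified baseline.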

12) Feedback loops: complaints and postmarket signals

A DHF is not frozen forever. Postmarket evidence is often the trigger for design changes. If your field performance is monitored through postmarket surveillance and customer complaint handling, then DHF governance must support rapid trace-based investigation:

  • what requirement or risk control is implicated?
  • what design output implements it?
  • what evidence originally supported it?
  • what change is needed, and what V&V must be repeated?

This is where DHF quality pays off. When a complaint arrives, you should be able to “walk the trace” quickly and determine whether the issue is a design gap, a manufacturing escape, a labeling problem, or a use error. Without traceability, investigations devolve into opinion and delay.

13) What to retain: the DHF evidence pack

A DHF is only defensible if you can produce evidence on demand. Retention must align with your record retention strategy, and electronic records must be governed for integrity.

Recommended DHF evidence pack contents:

  • DHF index or structure map: how the DHF is organized and where key records live.
  • User needs and design inputs: controlled versions and approval records.
  • Design outputs: released drawings/specs/software baselines and their revisions.
  • Traceability matrix: bidirectional trace, versioned, reviewed.
  • Risk management file linkage: ISO 14971 artifacts and risk-control trace.
  • Verification and validation evidence: protocols, results, and deviations.
  • Design review records: agendas, attendees, minutes, actions, and closure.
  • Design transfer evidence: release package and DMR handoff proof.
  • Change records: change requests, impact assessments, approvals, and re-test evidence.

Also retain DHF access and modification history in a way that supports investigation. This is where audit trail expectations and data integrity become practical. If you cannot prove who changed what and when, you weaken the credibility of the record.

14) KPIs and operating cadence

A DHF should not be a once-a-year compliance ritual. It should be an operating control with measurable outcomes.

Trace completeness
Percent of requirements mapped to objective verification/validation evidence.
Risk-control coverage
Percent of risk controls mapped to implemented outputs and verified evidence.
Change closure time
Time from change request to fully updated DHF evidence chain.
Review action closure
Percent of design review actions closed on time with evidence.
Audit readiness
Time to retrieve key DHF evidence upon request (hours, not weeks).
Investigation trace speed
Time to map a complaint to implicated requirements, risks, and evidence.

Cadence should reflect risk and change rate. Complex devices, frequent software releases, multiple suppliers, and high complaint exposure require more frequent DHF health checks than stable, low-change products. If you are moving fast, your DHF governance must move fast too—or it becomes fiction.
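Two of the KPIs above — trace completeness and risk-control coverage — reduce to simple ratios over the trace data. The record shapes below are assumptions for illustration; the arithmetic is the point.

```python
# Sketch of two KPI computations from this section: trace completeness and
# risk-control coverage. Field names and IDs are hypothetical.

requirements = {
    "DI-001": {"evidence": ["VER-55"]},
    "DI-002": {"evidence": []},        # no objective evidence yet
    "DI-003": {"evidence": ["VER-57"]},
}
risk_controls = {
    "RC-03": {"output": "DO-21", "verified": True},
    "RC-04": {"output": None, "verified": False},
}

trace_completeness = 100 * sum(
    1 for r in requirements.values() if r["evidence"]) / len(requirements)
risk_coverage = 100 * sum(
    1 for c in risk_controls.values() if c["output"] and c["verified"]) / len(risk_controls)

print(f"trace completeness: {trace_completeness:.0f}%")
print(f"risk-control coverage: {risk_coverage:.0f}%")
```

Because both numbers come straight from the trace data, they stay honest only if the matrix itself is kept current — which is why the KPIs and the governance cadence belong together.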

15) The “inspection-ready” DHF block test

If you want a fast go/no-go check, use a DHF block test. The goal is to prove that the record chain is intact, not to admire your document templates.

DHF Block Test (Fast Proof)

  1. Pick one high-risk requirement: confirm the approved input exists and is current.
  2. Trace to design output: show the output version that implements the requirement.
  3. Trace to risk control: show the linked hazard and control measure (ISO 14971 linkage).
  4. Trace to verification evidence: show protocol, raw results, deviations, and acceptance decision.
  5. Validation linkage: show how intended use/user needs were validated where applicable.
  6. Design review evidence: show that key decisions were reviewed and actions closed.
  7. Change record check: if the item changed, show impact assessment and evidence updates.
  8. Transfer check: show the released output is reflected in the DMR.

If you cannot complete this block test quickly, your DHF might still contain “many documents,” but it is not functioning as a control. Treat failure as a quality exception and correct it.
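The eight-step block test lends itself to automation as a pass/fail gate. A minimal sketch, assuming a flat record per requirement (the field names below are hypothetical stand-ins for however your system stores each step's evidence):

```python
# Compact sketch of the eight-step DHF block test as a pass/fail gate for one
# high-risk requirement. The record shape and field names are hypothetical.

def block_test(record):
    """Return the first failing step name, or None if the chain is intact."""
    steps = [
        ("approved input", record.get("input_current")),
        ("design output", record.get("output_version")),
        ("risk control", record.get("risk_link")),
        ("verification evidence", record.get("verification")),
        ("validation linkage", record.get("validation")),
        ("design review closure", record.get("review_closed")),
        ("change record", record.get("change_assessed")),
        ("DMR transfer", record.get("in_dmr")),
    ]
    for name, ok in steps:
        if not ok:
            return name
    return None

rec = {"input_current": True, "output_version": "DO-21 rev C", "risk_link": "RC-03",
       "verification": "VER-77", "validation": "VAL-12", "review_closed": True,
       "change_assessed": True, "in_dmr": False}
print(block_test(rec))  # first broken link in the chain, here the DMR handoff
```

Failing fast on the first broken link mirrors how an inspector works: one missing step ends the demonstration, regardless of how strong the rest of the chain is.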

16) Common failure patterns

  • Evidence exists but isn’t linked. Test reports are present, but requirements and risk controls are not traceable to them.
  • Requirements are not verifiable. Inputs read like marketing language with no acceptance criteria.
  • Risk is separate and stale. The ISO 14971 file isn’t updated when design changes occur.
  • Design reviews are informal. Decisions happen, but review records don’t show rationale or closure.
  • Change control updates outputs, not the story. Specs change, but traceability and evidence packs don’t.
  • Transfer is weak. Manufacturing builds with tribal knowledge; DMR doesn’t reflect the validated design.
  • Electronic record integrity is ignored. No clear audit trail, access governance, or retrieval discipline.
  • DHF is compiled at the end. “We’ll assemble it before submission” is how gaps become permanent.

17) Cross-industry notes: IVD, implants, and combination products

DHF fundamentals are universal, but emphasis varies by device type:

  • IVD devices: link design inputs to analytical performance evidence and traceability to labeling/claims; be ready for regulatory reporting expectations like 21 CFR Part 803.
  • Implants: materials, biocompatibility rationale, and long-term risk controls must be traceable and supported by objective evidence.
  • Combination products: design evidence must bridge device controls and relevant drug/biologic expectations (for example, controls influenced by 21 CFR Part 4).
  • UDI / labeling-controlled products: ensure traceability from labeling requirements to controlled outputs and production records such as 21 CFR Part 830.
  • Field correction readiness: DHF change discipline supports rapid, defensible action when required (see 21 CFR Part 806 context).

The common lesson: the DHF must make the released design defensible, and it must make change defensible when the real world forces evolution.


18) Extended FAQ

Q1. What is a Design History File (DHF)?
A DHF is the controlled collection of records proving a medical device was designed under disciplined design controls, including requirements, risk management, design reviews, and verification/validation evidence.

Q2. Is a DHF the same as the DMR?
No. A DHF proves the design was controlled; the DMR defines how to manufacture the released design.

Q3. What is the biggest DHF weakness in audits?
Missing traceability: requirements and risk controls are not clearly linked to objective verification/validation evidence.

Q4. How does ISO 14971 relate to the DHF?
The DHF should demonstrate that risk controls from ISO 14971 are implemented in design outputs and verified with objective evidence.

Q5. When should a DHF be “assembled”?
Continuously. If you assemble the DHF at the end, you’re admitting it wasn’t operating as a control during development.


Related Reading
• Design Controls: Medical Device Design | 21 CFR Part 820 | ISO 13485 | Verification & Validation (V&V)
• Risk + Usability: ISO 14971 | Human Factors Engineering (HFE) | Postmarket Surveillance | Complaint Handling
• Records Chain: DMR | DHR | eDHR | Instructions for Use (IFU)
• Governance: Document Control | Revision Control | Change Control | CAPA | Risk Matrix | Internal Audit
• E-Records Context: 21 CFR Part 11 | Annex 11 | Audit Trail (GxP) | Data Integrity | Record Retention
• EU Context: EU MDR 2017/745 | CE Marking | Notified Body

