
Supervisory Control & Data Acquisition (SCADA) – OT Control Layer, Not Your QMS

This topic is part of the SG Systems Global regulatory & operations glossary.

Updated October 2025 • OT/ICS, Data Integrity, Part 11/Annex 11, Cybersecurity • Manufacturing, QA/IT, Engineering

SCADA is the supervisory nervous system of the plant floor: it acquires time‑stamped process values from PLCs/RTUs, visualizes them through the HMI, manages alarms and events, and—where authorized—writes setpoints and recipes back to equipment. It’s fast, it’s operational, and it’s essential for safe production. But here’s the line that too many organizations blur: SCADA is not your quality management system and it’s not your batch record. SCADA generates evidence; MES and eBMR consume it to make releasable decisions. If you try to make SCADA carry your regulated record burden without proper governance, you’ll end up with “data” you can’t defend and product you can’t release. Treat SCADA as an OT control layer that must still meet Data Integrity requirements and the expectations of 21 CFR Part 11 and Annex 11 when it supports GxP decisions. If you don’t, inspectors will find the gaps quickly: shared logins, missing audit trails, unsynchronized time, unvalidated interfaces, and spreadsheets pretending to be systems.

“SCADA moves equipment; MES moves evidence. Mix them up and you’ll ship on hope, not proof.”

TL;DR: SCADA is the control/visibility layer for equipment and utilities. In regulated plants it’s a record generator, not the record of truth. Lock down users with UAM, enable audit trails, validate per GAMP 5/CSV, time‑sync historians, and integrate cleanly with MES/LIMS. No shared logins, no offline edits, no spreadsheet archaeology.

1) What SCADA Covers—and What It Does Not

Covers: supervisory control over unit operations and utilities, alarm/event detection and handling, time‑series data collection to a historian, recipe and setpoint governance (with proper approvals), operator guidance through the HMI, and standardized interfaces to PLCs/RTUs. In practice this means monitoring temperatures, pressures, flows, levels, valve states, motor speeds, CIP/SIP cycles, environmental conditions, and facility services such as purified water, HVAC, and compressed air. Properly engineered SCADA exposes context—unit, batch, mode, version—so a trend is more than just numbers. It also separates operator actions from automatic control logic and captures who changed what and why, reducing tribal knowledge and “mystery setpoints.”

Does not cover: formal batch records, device history records, QA disposition, label content control, or regulatory submissions. Those belong to eBMR/DHR, Release Status workflows, and controlled labeling systems. You may visualize quality‑critical data in SCADA, but the decision to release or reject must reside in validated, reviewable systems with the right approval semantics and data lineage. If your SCADA screens are your “batch record,” you are one audit away from a shutdown.

2) Legal, System, and Data Integrity Anchors

Any SCADA data used to demonstrate compliance must satisfy ALCOA(+): attributable (named user accounts, not “Operator1”), legible (readable units, engineering context), contemporaneous (real‑time capture with clock sync), original (stored in a protected historian with change control), and accurate (verified I/O, calibrated instruments, validated calculations). Wrap that with Part 11/Annex 11 controls: unique users with role‑based permissions guided by UAM, time‑stamped audit trails for configuration and operator actions, secure e‑signatures where approval/meaning is required, and durable retention with successful restore tests. None of this is optional if SCADA contributes to release decisions or continued process verification (CPV), and pretending otherwise is a liability.
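
To make "attributable," "contemporaneous," and "original" concrete, here is a minimal sketch of an ALCOA‑shaped event record. The field names are illustrative, not any vendor's schema, and a real system would persist this in a protected historian or audit‑trail store rather than application memory.

```python
# Illustrative only: an ALCOA-shaped record for a SCADA action. Field names are
# hypothetical, not a vendor schema; persistence and protection are out of scope here.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)                 # original: the record cannot be edited after creation
class ScadaEvent:
    user_id: str                        # attributable: a named account, never "Operator1"
    action: str                         # e.g. "setpoint_change"
    tag: str                            # which instrument/tag was touched
    old_value: float
    new_value: float
    units: str                          # legible: engineering units, not raw counts
    reason: str                         # the meaning behind the change
    timestamp_utc: datetime             # contemporaneous: captured when the action happened

event = ScadaEvent(
    user_id="jsmith",
    action="setpoint_change",
    tag="TT-101.SP",
    old_value=72.0,
    new_value=74.5,
    units="degC",
    reason="Per approved change CC-2031",
    timestamp_utc=datetime.now(timezone.utc),
)
```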

3) The Evidence Pack for SCADA in GxP

A credible SCADA implementation stands on a documented chain from user intent to qualified operation. Start with a URS that states what the system must do and, crucially, what it will not do (e.g., “SCADA will not serve as the system of record for batch release”). Flow that into functional and design specifications; capture risk and control rationales under GAMP 5. The VMP and V&V plan define how you will verify data acquisition accuracy, alarm behaviors, user management, audit trails, backup/restore, historian integrity, and interface mappings to MES/LIMS/ELN. Qualify infrastructure and software through IQ/OQ/PQ, including negative tests (e.g., unauthorized role cannot change a recipe; audit trail cannot be disabled without detection). Finally, lock it with SOPs covering configuration management, audit‑trail review, user provisioning, patching, and incident response. If any of those artifacts are missing, your “validated” claim is paper‑thin.
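
By way of illustration, a negative test can be expressed in code. The sketch below assumes a hypothetical change_recipe() function guarded by role permissions; the executed OQ protocol is your real evidence, but the logic is the same: the unauthorized role must be refused, and the refusal must be demonstrable.

```python
# Hypothetical negative-test sketch (pytest style). change_recipe() and the role
# table are assumptions for illustration, not a real SCADA vendor API.
import pytest

ROLE_PERMISSIONS = {"operator": {"acknowledge_alarm"},
                    "engineer": {"acknowledge_alarm", "change_recipe"}}

def change_recipe(role: str, recipe: str, new_setpoint: float) -> str:
    if "change_recipe" not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not modify recipes")
    return f"{recipe} setpoint -> {new_setpoint}"

def test_operator_cannot_change_recipe():
    # The negative case: refusal must be explicit, not a silent no-op.
    with pytest.raises(PermissionError):
        change_recipe("operator", "GRAN-01", 74.5)

def test_engineer_can_change_recipe():
    assert change_recipe("engineer", "GRAN-01", 74.5).endswith("74.5")
```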

4) From Sensor to Release—A Standard Path

Data flows from the physical world to release decisions through a series of transformations that either preserve integrity or corrupt it. The sane path is: sensor → PLC/RTU → SCADA/HMI → historian → governed interface → MES/eBMR → QA decision in Release Status. At each hop, define units, context (unit/batch/step), engineering limits, and handling for bad data (out‑of‑status instrument, comms loss, time drift). If prerequisites fail—unsynchronized clocks, disabled audit trail, historian gaps, or a device out of calibration—block consumption of that data by MES and open a controlled Deviation/CAPA before proceeding. Anything less is hoping no one asks to see the chain of custody from sensor to release.
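
A minimal sketch of that consumption gate, using hypothetical status flags; in a real deployment the checks would be fed by the historian, NTP monitoring, and the calibration system rather than a hand-built dictionary.

```python
# Sketch of a consumption gate: MES binds SCADA data to a batch step only when the
# prerequisites hold. The status flags and threshold below are placeholders.
MAX_CLOCK_DRIFT_S = 2.0

def may_consume(tag_status: dict) -> tuple[bool, list[str]]:
    """Return (ok, reasons) for whether this tag's data may be bound to the eBMR."""
    reasons = []
    if not tag_status.get("audit_trail_enabled", False):
        reasons.append("audit trail disabled")
    if abs(tag_status.get("clock_drift_s", float("inf"))) > MAX_CLOCK_DRIFT_S:
        reasons.append("time sync out of tolerance")
    if tag_status.get("calibration_state") != "in_cal":
        reasons.append("instrument out of calibration status")
    if tag_status.get("historian_gap", True):
        reasons.append("historian gap in the batch window")
    return (not reasons, reasons)

ok, reasons = may_consume({"audit_trail_enabled": True, "clock_drift_s": 0.4,
                           "calibration_state": "in_cal", "historian_gap": True})
if not ok:
    # In practice: hold the eBMR step and open a Deviation/CAPA, not "use it anyway".
    print("BLOCK consumption:", "; ".join(reasons))
```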

5) Alarms, Interlocks & Limits

Alarms are not décor; they are engineered safeguards. Build an alarm philosophy that defines severities, deadbands, delays, and escalation. Rationalize alarms so operators see what matters and can act; eliminate floods that numb response. Quality‑critical parameters should mirror equipment setpoints with alert/action limits and be charted with SPC. Tie repeated excursions to formal investigation using RCA, and implement hard interlocks only where risk justifies them and where recovery is unambiguous. Record operator acknowledgements with identity and reason, and make alarm response part of on‑the‑job training. If your alarm list is three screens long and half are stale, you don’t have control—you have noise.
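
The deadband and on-delay mechanics are simple enough to sketch; the limits below are placeholders, and your alarm philosophy defines the real ones.

```python
# Illustrative deadband + on-delay alarm logic; a stand-in for real alarm engine
# configuration, not a replacement for it.
HIGH_LIMIT = 80.0      # action limit (e.g. degC)
DEADBAND = 2.0         # must fall below HIGH_LIMIT - DEADBAND to clear (prevents chattering)
ON_DELAY_SAMPLES = 3   # must exceed the limit for N consecutive samples before annunciating

def evaluate(samples: list[float]) -> list[str]:
    events, active, over_count = [], False, 0
    for i, value in enumerate(samples):
        if value > HIGH_LIMIT:
            over_count += 1
            if not active and over_count >= ON_DELAY_SAMPLES:
                active = True
                events.append(f"sample {i}: HIGH alarm raised at {value}")
        else:
            over_count = 0
            if active and value < HIGH_LIMIT - DEADBAND:
                active = False
                events.append(f"sample {i}: HIGH alarm cleared at {value}")
    return events

print(evaluate([79.0, 80.5, 81.0, 81.2, 80.9, 79.5, 77.0]))
# -> raised at 81.2 (third consecutive excursion), cleared at 77.0 (below the deadband)
```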

6) Data Acquisition, Time Sync & Historian

Sampling rates must be chosen by risk and physics, not convenience. Fast dynamics require higher rates; slow utilities can be polled. Capture raw and engineered values with units, decimal precision, and good metadata: tag owner, calibration interval, minimum/maximum ranges, and whether the tag is product‑impacting. Enforce plant‑wide time synchronization; if clocks drift, your audit trails and trends become suspect. Quantify historian compression and prove that meaningful variation survives—blindly cranking compression to save disk space can erase the very evidence you need to show control. Validate backfill logic for comms outages and record data lineage into eBMR and CPV analytics. Backups without routine restore tests are fiction; schedule and document restores as part of Internal Audit checklists.
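
A clock-drift sweep is one of the cheapest checks to automate. The sketch below assumes you already collect per-node offsets (for example from your NTP monitoring); the node names and tolerance are hypothetical.

```python
# Sketch of a plant-wide clock-drift check against a hypothetical tolerance.
DRIFT_TOLERANCE_S = 1.0

node_offsets_s = {            # positive = node clock ahead of the reference
    "scada-srv-01": 0.12,
    "historian-01": 0.35,
    "plc-gw-east": 1.80,      # out of tolerance: audit trails on this node are suspect
}

violations = {node: off for node, off in node_offsets_s.items()
              if abs(off) > DRIFT_TOLERANCE_S}

for node, off in violations.items():
    # In practice: alarm, flag affected records, and capture the event for audit-trail review.
    print(f"TIME-SYNC VIOLATION: {node} drift {off:+.2f}s exceeds {DRIFT_TOLERANCE_S}s")
```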

7) Access & Security (OT/IT)

Cybersecurity is a quality problem in slow motion. Enforce least‑privilege UAM with named accounts for every operator and admin; ban shared logins. Log all admin actions and review them with the same rigor as deviations. Control remote access and segment networks so a single compromised node doesn’t expose your historian and batch data. Patch what you can under Change Control with risk‑based testing; document compensating controls where patching is not feasible due to vendor constraints. Tie account lifecycle to HR onboarding/offboarding, and explicitly test that disabled accounts cannot authenticate. Regulatory inspectors increasingly ask for this; if you fail, they will question the integrity of every downstream record your SCADA produced.
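
A periodic account-hygiene sweep can be scripted. The sketch below reconciles a hypothetical SCADA account list against an HR roster; the data sources are placeholders, but the findings it looks for (shared-style names, orphan accounts) are exactly what inspectors probe.

```python
# Hypothetical account-hygiene sweep; the account list and roster are placeholders,
# not a real directory or HR API.
scada_accounts = {"jsmith": "enabled", "mlee": "enabled",
                  "operator1": "enabled", "tchan": "disabled"}
hr_active_staff = {"jsmith", "mlee", "tchan"}
GENERIC_NAMES = {"operator", "operator1", "admin", "line1", "shift"}

findings = []
for account, state in scada_accounts.items():
    if account in GENERIC_NAMES:
        findings.append(f"{account}: generic/shared-style account (UAM violation)")
    if state == "enabled" and account not in hr_active_staff:
        findings.append(f"{account}: enabled but not on the active HR roster (orphan)")

# Disabled accounts must also be *tested* for authentication refusal, not just listed;
# that test belongs in your periodic review evidence.
print("\n".join(findings) or "No findings")
```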

8) Infrastructure, Redundancy & Lifecycle

SCADA architecture should be boring in the best way: redundant servers or VMs, failover tested under load, mission‑critical storage with monitored health, and documented disaster recovery. Keep platform versions within vendor support windows and plan lifecycle transitions months in advance; the last week of support is not the day to discover a driver incompatibility with your historian. Include edge gateways where appropriate to isolate PLC networks while providing clean data to enterprise layers. All of this belongs under controlled documents and receives verification during OQ and performance checks during PQ or PPQ as applicable. If your recovery plan is “reboot it and hope,” you’re gambling with batch continuity and data loss.

9) Integrations: MES, LIMS/ELN, WMS

Integrations are where data integrity goes to live—or die. Use governed interfaces to MES, LIMS/ELN, and WMS that include unit mapping, batch context, engineering units conversion, and retry/queuing with tamper‑evident logs. Validate that what SCADA stores is what MES receives—no silent truncation, rounding, or time‑zone surprises. Kill manual re‑keying of critical values into eBMR; humans are great at judgment and terrible at transcription. When integrations fail, block release and open a documented path to reconstruct evidence or re‑test as risk requires. Interfaces are not “set and forget”; they drift if no one owns them.
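
Reconciliation is easy to automate and hard to argue with. A minimal sketch, assuming hypothetical row shapes on both sides; the point is that value and timestamp must survive the interface unchanged, and any discrepancy gets surfaced rather than silently absorbed.

```python
# Sketch of a SCADA-to-MES reconciliation check. Row shapes and field names are
# hypothetical; the real mapping comes from your interface specification.
from datetime import datetime, timezone

TOL = 1e-6  # values should survive the interface unchanged; allow only float noise

def _key(row: dict) -> tuple:
    # Normalize timestamps to UTC so time-zone handling differences surface as misses.
    return (row["tag"], row["timestamp_utc"].astimezone(timezone.utc))

def reconcile(scada_rows: list[dict], mes_rows: list[dict]) -> list[str]:
    mes = {_key(r): r for r in mes_rows}
    issues = []
    for s in scada_rows:
        m = mes.get(_key(s))
        if m is None:
            issues.append(f"{s['tag']}: record missing in MES")
        elif abs(s["value"] - m["value"]) > TOL:
            issues.append(f"{s['tag']}: SCADA={s['value']} vs MES={m['value']} "
                          "(silent rounding or truncation?)")
    return issues

t = datetime(2025, 10, 7, 14, 0, tzinfo=timezone.utc)
print(reconcile([{"tag": "TT-101", "timestamp_utc": t, "value": 74.53}],
                [{"tag": "TT-101", "timestamp_utc": t, "value": 74.5}]))
```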

10) Utilities & Facility Monitoring

SCADA often runs the quiet systems that ruin batches when neglected: purified water, clean steam, HVAC, cold rooms, compressed gases. Integrate SCADA with Utilities Qualification (UQ), Environmental Monitoring, and Temperature Mapping programs so deviations in environmental conditions route straight into QA triage with required evidence attached. Validate utility sensor tags like any other GxP instrument, including calibration and minimum detectable shift. If you rely on utility trends to demonstrate compliance but can’t show calibration status or time sync, expect a finding and potential batch impact assessment across everything that utility touched.

11) Calibration & Out‑of‑Status Behavior

Each SCADA tag that feeds a GxP decision should be bound to a physical instrument with a unique ID and an active calibration status. When an instrument goes out of status, downstream decisions using that data must be blocked or flagged for impact assessment. Build logic and SOPs that tag historical values as “suspect” during the out‑of‑status window and require QA review before use. Store calibration certificates—and failures—under controlled records so the chain from data to device is auditable. If this sounds heavy, it’s cheaper than explaining to a regulator why you released product on unverified sensors.
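
A minimal sketch of the suspect-window idea, with illustrative structures; real logic would read the out-of-status window from your calibration system, not a hard-coded tuple.

```python
# Sketch: values recorded while the instrument was out of calibration status are
# flagged for QA review before use. Data structures are illustrative only.
from datetime import datetime, timezone

out_of_status = (datetime(2025, 10, 1, tzinfo=timezone.utc),
                 datetime(2025, 10, 3, tzinfo=timezone.utc))  # known bad window for TT-101

def flag_suspect(samples: list[dict], window: tuple) -> list[dict]:
    start, end = window
    for s in samples:
        s["suspect"] = start <= s["timestamp_utc"] <= end  # requires QA review before use
    return samples

samples = [
    {"tag": "TT-101", "timestamp_utc": datetime(2025, 10, 2, tzinfo=timezone.utc), "value": 73.9},
    {"tag": "TT-101", "timestamp_utc": datetime(2025, 10, 5, tzinfo=timezone.utc), "value": 74.1},
]
for s in flag_suspect(samples, out_of_status):
    print(s["timestamp_utc"].date(), s["value"], "SUSPECT" if s["suspect"] else "ok")
```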

12) Validation Across the Product Lifecycle

Link SCADA to the product lifecycle explicitly. During development, use SCADA to learn process dynamics and identify critical parameters; during tech transfer, freeze tag definitions and units; during PPQ, demonstrate that the automation and historian reliably capture required evidence; and during commercial operations, maintain control through CPV. Use SPC and Cp/Cpk to quantify headroom and set intervention thresholds that prevent scrap and rework. Treat recipe and setpoint changes as validated changes with impact assessment, not as “tweaks.” If you shortcut this, you’re managing by anecdotes, not evidence.
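
Cp/Cpk itself is a small calculation; the discipline is in how you sample and act on it. A minimal sketch using the overall standard deviation for simplicity, where a formal capability study would use within-subgroup sigma per your SPC procedure.

```python
# Minimal Cp/Cpk sketch; values and specification limits are illustrative.
from statistics import mean, stdev

def cp_cpk(values: list[float], lsl: float, usl: float) -> tuple[float, float]:
    mu, sigma = mean(values), stdev(values)
    cp = (usl - lsl) / (6 * sigma)                 # spread of spec vs spread of process
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # penalizes off-center processes
    return cp, cpk

values = [74.1, 74.3, 73.9, 74.2, 74.0, 74.4, 74.1, 73.8]
cp, cpk = cp_cpk(values, lsl=72.0, usl=76.0)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")   # headroom against the specification limits
```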

13) Operator Interfaces That Reduce Error

HMI design determines whether operators succeed or fail. Use clear units, high‑contrast trends, unambiguous states, and consistent navigation. Present alarms with plain‑language messages and next‑step guidance tied to SOPs. Prevent data entry where selection is safer; where entry is necessary, enforce ranges and reasons. Display batch context and approval state so no one is guessing whether a step is authorized. Link to controlled work instructions and checklists stored under Document Control. Good HMI reduces training load and error rates; bad HMI produces deviations you could have prevented with basic design sense.
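
Range-and-reason enforcement is simple to express. A minimal sketch for the rare case where manual entry is justified; the limits, tag names, and prompt flow are hypothetical, not a specific HMI toolkit.

```python
# Sketch of enforcing ranges and reasons on manual entry; names and limits are placeholders.
def accept_entry(tag: str, value: float, low: float, high: float, reason: str) -> float:
    if not (low <= value <= high):
        raise ValueError(f"{tag}: {value} outside permitted range [{low}, {high}]")
    if not reason.strip():
        raise ValueError(f"{tag}: a reason is required for manual entry")
    return value  # only now is the value bound to the record, with identity and reason attached

accept_entry("pH-301", 6.9, low=6.5, high=7.5, reason="Manual check per SOP-123 step 4")
```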

14) Data Integrity—Trust the Numbers or Don’t Ship

Every SCADA entry that matters must be reconstructable without heroics. That means immutable audit trails for configuration and operator actions, identity‑linked approvals with meaning, and attachments (photos, chromatograms, calculations) where they explain context. Electronic records must be retained in formats you can still read in five years, with migration plans and restore tests proving both integrity and usability. If you have to export CSVs to prove anything, you’ve already lost traceability. Build reports that pull from the historian and show evidence inline with signatures and time context. Data integrity is not a policy; it’s a daily discipline.

15) Metrics That Demonstrate Control

  • Alarm Health: floods per shift, stale alarms, mean response/clear time, % with documented response per SOP.
  • Historian Integrity: time‑sync drift incidents, data loss rate, backfill success, compression error vs. lab anchors.
  • Access Hygiene: shared‑login rate (target zero), orphan accounts, admin action reviews closed on time.
  • Change/Patch Compliance: changes executed under Change Control, validated, and linked to risk; overdue patches with compensating controls.
  • OT Uptime: availability of critical nodes correlated to product impact and OEE.
  • Integration Quality: interface failures, retries, and reconciliation time to restore end‑to‑end data flow into eBMR.

If a metric isn’t driving a control decision or improvement action, delete it. Dashboards are not scrapbooks.
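
To show what "computable" means here, the sketch below estimates alarm floods per shift by counting ten-minute buckets that exceed a threshold; the threshold is a common rule of thumb, and your alarm philosophy defines the real one.

```python
# Sketch of one metric from the list above: alarm floods per shift. The bucket size,
# threshold, and shift length are illustrative placeholders.
import math
from datetime import datetime, timedelta

FLOOD_THRESHOLD = 10           # alarms per bucket before it counts as a flood
WINDOW = timedelta(minutes=10)

def floods_per_shift(alarm_times: list[datetime], shift_start: datetime) -> int:
    """Count 10-minute buckets in an 8-hour shift with more than FLOOD_THRESHOLD alarms."""
    buckets = [0] * (8 * 6)                       # 8 h shift split into 10-minute buckets
    for t in alarm_times:
        idx = math.floor((t - shift_start) / WINDOW)
        if 0 <= idx < len(buckets):
            buckets[idx] += 1
    return sum(1 for n in buckets if n > FLOOD_THRESHOLD)

shift = datetime(2025, 10, 7, 6, 0)
alarms = [shift + timedelta(seconds=30 * k) for k in range(20)]   # 20 alarms in ~10 minutes
print(floods_per_shift(alarms, shift))   # -> 1
```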

16) Common Pitfalls & How to Avoid Them

  • Using SCADA as the batch record. Route values into eBMR; don’t pretend trends and screenshots are release evidence.
  • Shared operator logins. Violates UAM and destroys attribution. Enforce named accounts and session locking.
  • No time synchronization. Unverifiable sequences make audit trails meaningless. Monitor NTP/clock drift and alarm on failure.
  • Unvalidated integrations. Manual re‑keys and file drops with no reconciliation corrupt records. Govern interfaces and test mapping on every change.
  • Poor backup/restore discipline. Backups you never restore are wishful thinking. Schedule restores and document results.
  • Change outside governance. All edits under Change Control with impact assessment, testing, and versioned release notes.
  • Ignoring metrology. Out‑of‑status instruments poison data. Bind tags to calibration and block use on lapse.
  • Alarm overload. Floods hide risk. Rationalize and measure operator load.

17) What Belongs in the SCADA Dossier

Maintain a dossier under Document Control that inspectors can navigate without a tour guide: architecture and network diagrams; tag list with ownership, units, ranges, and criticality; alarm philosophy and current catalog; user/role matrix and provisioning SOP; audit‑trail model and review evidence; historian retention/restore strategy with last successful restore date; interface specifications and mapping; validation package (V&V, IQ/OQ/PQ); cybersecurity plan; patch/exception log; training records; and recent deviation/CAPA closures linked to SCADA. If that sounds like a lot, it is. It’s also the price of having SCADA data be credible when your product is on the line.

18) How This Fits with V5 by SG Systems Global

Clean Integration. The V5 platform ingests SCADA tags and alarms into the MES context with unit mapping, unit‑of‑measure enforcement, and reason‑coded exceptions before values are bound to steps in the eBMR. The result is a single source of truth for release evidence without duplicate entry.

Execution & Interlocks. If critical prerequisites fail—audit trail off, time not synchronized, instrument out of calibration status, or a historian gap—V5 blocks execution, raises a hold in Release Status, and launches guided remediation tied to Deviation/CAPA. Operators get steps to fix the issue; QA gets the evidence to sign off.

Analytics & CPV. V5 dashboards combine historian trends with SPC and CPV, estimating risk to quality in real time and surfacing drift before it becomes scrap or a complaint. You see exactly which tags, units, and shifts are driving CAPAs, and you can prove savings in reduced deviations and stabilized OEE.

Bottom line: V5 turns SCADA signals into validated, reviewable records that support compliant release—no guesswork, no manual copy‑paste, and a defensible chain from sensor to shipment.

19) FAQ

Q1. Is SCADA subject to Part 11?
If SCADA data supports GxP decisions or feeds the eBMR/release process, yes. Apply Part 11/Annex 11 controls: unique users, audit trails, validated calculations and interfaces, e‑signatures where approvals carry meaning, and tested retention/restoration. If you only use SCADA for utilities and never for GxP evidence, document that scope—and keep it that way.

Q2. SCADA vs. HMI vs. MES—what’s the boundary?
SCADA supervises and acquires; the HMI is the operator interface; MES orchestrates steps, records, reviews, and release. Keep approvals and product genealogy in MES/eBMR; use SCADA for real‑time control, alarms, and trends. Blurring those lines multiplies audit risk and slows operators with documentation chores the SCADA layer wasn’t built to handle.

Q3. Do recipe changes in SCADA need approval and signatures?
If the parameter affects product quality, safety, or compliance, yes. Capture identity and the meaning of the action. Either implement e‑signatures in SCADA with full audit trails or require that the approval occurs in MES with automatic setpoint download. Storing recipes as Word files on a share is not control; that’s a discovery exhibit.

Q4. Can SCADA be cloud‑hosted?
Possibly, but only if you address latency, availability, security, and CSV. You must prove you can export complete, unaltered records on demand, preserve time synchronization end‑to‑end, and maintain segregated access. Many plants use a hybrid: on‑prem control with cloud analytics for non‑real‑time insights. Document the architecture and validate the parts that touch GxP.

Q5. What do inspectors ask for first?
Alarm history with operator responses, audit‑trail extracts for configuration and recipe changes, user/role lists with recent provisioning changes, time‑sync evidence, backup/restore results, and validated interface logs to eBMR/LIMS. If you can produce those in minutes with clear attribution and dates, the conversation improves. If you need a day to piece together CSVs, expect findings.

Q6. How do I show ongoing control?
Run monthly reviews of alarm metrics, audit‑trail events, and CPV charts. Escalate chronic failures into CAPA and show measured outcomes—reduced excursions, stabilized Cpk, and fewer deviations. Tie SCADA changes to Change Control with documented risk and testing. Without that cycle, you’re not controlling; you’re reacting.


Related Reading
• Foundations: Part 11 | Annex 11 | GAMP 5 | CSV | NIST
• Systems & Records: MES | eBMR | LIMS | ELN
• Integrity & Governance: Data Integrity | Audit Trail | UAM | Record Retention | Internal Audit
• Lifecycle & Monitoring: Process Validation | CPV | UQ | Environmental Monitoring | Temperature Mapping | Cp/Cpk | SPC

