OT Cybersecurity | 24 min read

There Is No Minimum OT Security Checklist - Why Compliance Is Not Security

IEC 62443 without risk assessment is paper armor. Five cases where literal implementation of SR 1.1, SR 4.1 and SR 3.4 made OT security worse.

Józef Sulwiński
OT security · IEC 62443 · NIST 800-82 · compliance · legacy systems · hardening · zones and conduits

“What is the minimum set of controls I need to implement so I can sleep at night?”

I hear this question at every industry conference. We are searching for a Holy Grail - a universal checklist that will satisfy the regulator, the auditor, and secure the industrial process. My conclusion is brutal: no such list exists. Every attempt to “simplify to the minimum” is an acceptance of risk you do not know about.

Standards Are Not a Checklist - They Are a Decision Framework

Let us start with a fundamental misconception. IEC 62443, NIST SP 800-82 and NIS2 define security requirements (SR). Many managers treat them as a tick-box list: “I will implement SR 1.1 (authentication), SR 4.1 (encryption), network segmentation - done.”

But IEC 62443 deliberately prohibits this approach. The sequence defined in IEC 62443-3-2 is unambiguous:

  1. Risk assessment - identify threats specific to the environment
  2. Partition into zones and conduits - group systems with common security requirements
  3. Assign a Security Level Target (SL-T) - for each zone individually, based on the identified threats
  4. Select security controls (SR) appropriate to the SL-T of each zone

Anyone who skips steps 1-3 and jumps straight to implementing controls builds paper armor - documentation that looks good during an audit but protects against nothing specific.
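The four-step sequence can be sketched as executable logic. The Python below is purely illustrative - the zone names, threat labels and the threat-to-SL-T mapping are invented for the example; a real assignment follows the documented risk assessment of IEC 62443-3-2, not a lookup table:

```python
from dataclasses import dataclass, field

# Invented mapping for illustration: identified threat class -> minimum SL-T.
THREAT_TO_SLT = {
    "untargeted malware": 1,        # casual / unintentional violations
    "targeted intrusion": 2,        # intentional, simple means
    "sophisticated attacker": 3,    # intentional, sophisticated means
}

@dataclass
class Zone:
    name: str
    threats: list = field(default_factory=list)  # output of step 1
    sl_target: int = 0                           # assigned in step 3

def assign_sl_target(zone: Zone) -> Zone:
    # Step 3: SL-T per zone, driven by the threats identified in step 1.
    zone.sl_target = max((THREAT_TO_SLT[t] for t in zone.threats), default=1)
    return zone

safety = assign_sl_target(Zone("safety", ["sophisticated attacker"]))
dmz = assign_sl_target(Zone("ot-dmz", ["targeted intrusion"]))
print(safety.sl_target, dmz.sl_target)  # different zones, different targets
```

The point of the sketch is the order of operations: controls (step 4) are a function of the SL-T, which is a function of the threats - not the other way around.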

Encryption (SR 4.1) - A Control That Can Kill

SR 4.1 (Information Confidentiality) requires “protection of confidentiality of information in transit across untrusted networks through encryption.”

An auditor arrives, sees unencrypted Modbus TCP communication between an engineering workstation and a PLC, and writes a non-conformity. The fix seems simple: deploy TLS. Except:

| Aspect | OT Reality |
| --- | --- |
| Latency | TLS adds 5-100 ms per packet. In Safety and power systems, every additional millisecond introduces process risk (see below). |
| Controller support | Siemens S7-300, Allen-Bradley MicroLogix, most legacy controllers do not support TLS. Firmware has been frozen for years. |
| Process certification | In pharma (GMP) and nuclear energy, any change to controller communications requires full process revalidation - weeks of testing, hundreds of thousands in costs. |
| Modbus/TCP Security spec | Has existed since 2018 (port 802). In practice, a fraction of a percent of devices on the market implement it. |

Why milliseconds matter - from a home circuit breaker to a 110 kV substation

A residual current device (RCD) in a home distribution board must trip within 40 milliseconds (IEC 61008, at 5x rated residual current) - fast enough to save a life.

At a 110 kV power substation, protection relays communicate using IEC 61850 GOOSE messages. When one relay detects a line fault, it sends a trip command to the others. The standard requires this message to arrive within 3 milliseconds - more than ten times faster than a home RCD. A fault on a 110 kV line that persists a few milliseconds too long means a transformer fire and a blackout affecting tens of thousands of consumers.

Should such a protection message be cryptographically signed? Of course - an unauthorized trip command means a blackout, blocking a legitimate one means a transformer fire. But older intelligent electronic devices (IEDs) lack the processing power to compute an HMAC signature within 3 milliseconds. And if a certificate expires? The protection message is rejected as unauthorized - by the very mechanism designed to protect it.
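The timing constraint can be made concrete. The sketch below times an HMAC-SHA256 over a GOOSE-sized payload; a modern workstation finishes in microseconds, but the same operation on a legacy IED with a slow microcontroller and no crypto acceleration can consume a large share of the 3 ms budget (the key and payload here are placeholders):

```python
import hmac
import hashlib
import time

key = b"\x00" * 32       # placeholder key, illustrative only
payload = b"\xaa" * 200  # roughly the size of a GOOSE message frame

start = time.perf_counter()
tag = hmac.new(key, payload, hashlib.sha256).digest()
elapsed_ms = (time.perf_counter() - start) * 1000

# A workstation stays well inside the 3 ms GOOSE budget; a 15-year-old IED
# without hardware crypto may not - and that is the whole dilemma.
print(f"HMAC-SHA256: {len(tag)}-byte tag in {elapsed_ms:.3f} ms")
```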

CISA confirmed this in its February 2026 report “Barriers to Secure OT Communication: Why Johnny Can’t Authenticate” - critical infrastructure operators are deliberately refusing to deploy encryption and cryptographic signing out of concern that cybersecurity mechanisms will disable process safety.

WARNING

Blindly deploying SR 4.1 in a Safety zone (SIL-rated) without analyzing the impact on system response time is not “improving security” - it is introducing new process risk. IEC 62443 defines encryption as a requirement on the conduit between zones, not on every connection within a zone.

What to do instead:

Partition the system into zones. Determine which conduits traverse untrusted networks (corporate network, VPN, Internet). Encryption is required on those conduits. Inside a Safety zone - where latency is critical - use other controls: physical isolation, VLAN with port-level access control, passive monitoring (NDR).
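The decision rule above can be written down as a small function - a sketch, not a normative mapping; the control names are generic labels, not IEC 62443 terms:

```python
def conduit_controls(crosses_untrusted: bool, safety_zone: bool) -> list[str]:
    """Sketch of the decision rule: where does encryption belong?"""
    if safety_zone:
        # Latency-critical zone: compensate instead of encrypting in-zone traffic.
        return ["physical isolation", "port-level access control", "passive NDR"]
    if crosses_untrusted:
        # Conduit traversing corporate network, VPN or Internet.
        return ["encryption (SR 4.1)", "mutual authentication"]
    return ["segmentation", "monitoring"]
```

The same requirement (SR 4.1) thus lands on different conduits with different force - which is exactly what a flat checklist cannot express.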

Authentication (SR 1.1) - A Requirement Impossible to Meet “Out of the Box”

SR 1.1 (Human User Identification and Authentication) requires: “the control system shall provide the capability to identify and authenticate all human users on all interfaces that provide access.” Sounds reasonable. Look at the reality:

Scenario 1: HMI Panel on the Production Floor

A DCS operator must respond to a process alarm within 3 seconds. The HMI panel is shared across shifts. Implementing per-user login with a password means:

  • The operator sees the alarm
  • Must first log in (enter credentials or swipe a badge)
  • Only then can acknowledge the alarm and take corrective action

In a Safety system (SIL 2/3), this delay can have physical consequences. This is why most plants run HMI panels on shared accounts without authentication - and the auditor sees it.

Scenario 2: PLC on Modbus TCP

MITRE ATT&CK for ICS documents technique T0836 - Modify Parameter: an attacker modifies controller parameters, affecting the physical process. This is exactly what FrostyGoop did - crafted Modbus commands shut down heating in 600 buildings in Lviv.

The Modbus TCP protocol has no authentication mechanism whatsoever. Any device on the network can send a command to the controller and the controller will execute it. There is no way to “implement SR 1.1” at the protocol level - because the protocol does not support it.
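The protocol's lack of authentication is visible in the frame layout itself. The sketch below builds a complete, valid Modbus TCP "Write Single Register" request with nothing but byte packing - there is no field anywhere for an identity or a credential (register and value are arbitrary example numbers):

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    # PDU: function code 0x06 (Write Single Register) + register address + value.
    pdu = struct.pack(">BHH", 0x06, register, value)
    # MBAP header: transaction id, protocol id (always 0), length, unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

frame = modbus_write_single_register(1, 1, register=0x0010, value=9999)
print(frame.hex())  # 12 bytes - any host that can reach port 502 can send this
```

Twelve bytes from any host on the network, and the controller executes the write. FrostyGoop needed nothing more sophisticated than frames like this.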

Scenario 3: Default Manufacturer Credentials

MITRE ATT&CK ICS T0812 - Default Credentials: attackers exploit default manufacturer credentials. In OT this is not negligence - it is a deliberate decision: “what if someone needs to log into the controller at 3 AM and the only person who knows the password is on vacation?”

| Control | In IT | In OT |
| --- | --- | --- |
| Enforce changing default passwords | Standard - Active Directory enforces on first logon | Changing a PLC password may require a controller restart = process downtime |
| MFA | Standard on VPN, email, applications | HMI panel with MFA at a 3-second response time = process safety risk |
| Password rotation policy | Every 90 days, system-enforced | PLC does not support rotation - password is flashed into firmware |
| Account lockout after failed attempts | Standard - 5 attempts, 15-min lockout | Locking out a DCS operator account during a process alarm = catastrophe |

TIP

The solution is not “implement SR 1.1 everywhere” but proper zone partitioning. In the operator zone (SL-1/SL-2) - role-based authentication at the SCADA application level. On the IT-OT conduit - a jump server with MFA and full session logging. At the Internet boundary - VPN with certificates. Inside the Safety zone - physical access control + anomaly monitoring.

Vendor “Compliance” - A Certificate That Says Nothing About Your Environment

Many vendors declare “IEC 62443 compliance.” Let us unpack that:

  1. IEC 62443-4-2 component certificate - the vendor certified a PLC against component-level requirements. Your system is not a component - it is dozens of components, networks, configurations and 15 years of customization. A certificate on a bolt does not certify a bridge.

  2. Security Level 1 - most certificates are issued at SL-1, which protects against “casual and unintentional violations.” SL-1 does not protect against intentional attacks (that is SL-2+). But the sales documentation simply states “IEC 62443 compliant.”

  3. Hardening guide vs. warranty - the vendor publishes a hardening guide but disclaims: “implementing recommendations beyond default configuration may affect warranty and technical support.” Translation: “you can harden it, but if something breaks - don’t call us.”

  4. Quarterly patches - the vendor releases patches. Before a patch passes the system integrator’s tests, integration tests, regression tests and a maintenance window - 6-12 months elapse. During that time the system is vulnerable to known exploits.

MITRE ATT&CK for ICS - How Attackers See Your “Minimum”

Let us stop thinking about minimum controls. Look from the attacker’s perspective:

| Tactic | Technique | Real-World Example | Which "minimum" addresses this? |
| --- | --- | --- | --- |
| Initial Access | T0866 - Exploitation of Remote Services | Industroyer - exploitation of IEC 104, IEC 61850 protocols | Segmentation + monitoring - but only if the conduit is properly defined |
| Lateral Movement | T0859 - Valid Accounts | TRITON - stolen credentials to access SIS | Authentication - but SIS does not support MFA |
| Impact | T0836 - Modify Parameter | FrostyGoop - modified ENCO controller parameters via Modbus | None of the "minimum" - Modbus has no authentication. The only protection is zone isolation. |
| Impact | T0831 - Manipulation of Control | Stuxnet - manipulation of uranium centrifuge speeds | Process anomaly monitoring - a control absent from any "minimum checklist" |

Attackers do not review your compliance checklist or look for gaps in documentation. They look for the weakest point - and “minimum controls” guarantee there are many.

When a Regulatory Requirement Collides with Controller Reality

These cases are not hypothetical. Each shows a situation where a regulatory requirement cannot be literally implemented in an OT environment.

Case 1: “Implement Encrypted Communications” - Poland, Energy Sector, December 2025

Requirement: SR 4.1 (IEC 62443-3-3) - protect confidentiality of data in transit. NIS2 Art. 21(d) - “security of supply chains, including security-related aspects concerning the relationships between each entity and its direct suppliers.”

Reality: On December 29, 2025, Sandworm attacked at least 30 wind and solar farms plus a CHP plant serving nearly half a million customers in Poland. Attackers gained access through vulnerable edge devices with default credentials. They deployed a wiper, destroyed HMI data, and damaged RTU firmware. CISA issued an alert in February 2026.

Collision: An auditor might say “encrypt RTU communications.” Problem: RTUs on wind farms often communicate via IEC 104 or Modbus TCP - protocols that do not support encryption. Replacing RTUs scattered across tens of kilometers of wind farms is a multi-million project taking months. Meanwhile, the actual attack vector was default credentials on routers - something that does not require hardware replacement but requires an administrative process (configuration review, credential rotation, hardening).

Case 2: “Implement Multi-Factor Authentication” - Oldsmar, Florida, 2021

Requirement: SR 1.1 RE 2 (IEC 62443-3-3) - multi-factor authentication for remote access. NIS2 Art. 21(j) - “use of multi-factor authentication solutions.”

Reality: At the Oldsmar water treatment plant, all computers shared a single TeamViewer password. An attacker remotely changed the sodium hydroxide concentration from 100 to 11,100 ppm. An operator noticed in time.

Collision: NIS2 literally requires MFA. But in a small water treatment plant:

  • Operators work on shared shift workstations
  • The SCADA system runs on Windows 7, which does not support modern MFA solutions
  • Replacing the SCADA system means revalidating the entire treatment process
  • A small municipality’s budget does not allow replacing a system that “works”

What actually failed? Not the absence of MFA - but the absence of change management (TeamViewer still active six months after being decommissioned), software inventory, and segmentation (SCADA visible from the Internet). Administrative controls, not technical.

Case 3: “Ensure Safety System Integrity” - TRITON, Saudi Arabia, 2017

Requirement: SR 3.4 (IEC 62443-3-3) - software integrity verification. IEC 61511 (SIS/Safety) - the process safety system must be independent and maintain integrity.

Reality: The TRITON malware compromised a Schneider Electric Triconex SIS controller - a system designed to shut down the process in case of explosion or fire. The attacker loaded a backdoor directly into the controller firmware (0-day exploit).

Collision: The standard requires “software integrity verification.” But the Triconex SIS controller:

  • Has no runtime firmware integrity verification mechanism
  • Does not log firmware changes - no audit trail
  • Does not support remote antivirus scanning (it is a safety-critical system with timing determinism requirements)
  • The only “verification” is checksum comparison - but an attacker with access to the engineering workstation can replace the checksum too

What actually failed? The SIS engineering workstation was reachable from the OT network - isolation was logical, not physical. The attacker spent a year in the corporate network before pivoting to OT. Had the SIS been physically isolated (air-gap or unidirectional data diode), TRITON would have had no way to reach the controller. This is not a matter of “controls on the controller” - it is a matter of zone architecture.

Case 4: “Deploy Malware Protection” - CrowdStrike, July 2024

Requirement: SR 3.2 (IEC 62443-3-3) - protection against malicious code. NIS2 Art. 21(f) - “policies and procedures to assess the effectiveness of cybersecurity risk management measures.”

Reality: In July 2024, a faulty CrowdStrike Falcon update caused BSOD on 8.5 million Windows systems. Losses: over $5 billion. Production and energy systems were also affected.

Collision: The organization deployed EDR (checklist control ticked). That same control caused a global outage of the systems it was supposed to protect. In OT, where availability is paramount, a security agent that itself causes unavailability is worse than no agent.

SR 3.2 requires “protection against malicious code” - but does not specify that it must be an EDR agent on the endpoint. In OT, the appropriate control is often:

  • Application whitelisting (instead of signature-based scanning)
  • Passive network traffic monitoring (NDR)
  • USB control at the zone boundary
  • Media verification procedure before connection

Each of these controls fulfills the objective of SR 3.2 without the risk of BSOD on a process controller.
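Application allowlisting, the first item above, reduces to a hash comparison and needs no signature feed, cloud connection or kernel driver. A minimal sketch - the allowlist digest here corresponds to an invented example binary, not any real software:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of the only executables approved
# to run on this engineering workstation. (This digest is of b"foo\n",
# standing in for a real approved binary.)
ALLOWED = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def is_allowed(path: Path) -> bool:
    """Allow execution only if the file's digest is on the allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in ALLOWED
```

A block-by-default rule like this fails closed against unknown malware - including samples no signature database has seen - which is why it suits static OT endpoints better than a constantly updated scanner.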

Case 5: “Maintain Audit Logs” - Legacy Systems in Poland

Requirement: SR 6.1 (IEC 62443-3-3) - audit logging, security event recording. NIS2 Art. 21(b) - “incident handling.”

Reality: PLCs Siemens S7-300/400, Allen-Bradley SLC 500/MicroLogix, GE Fanuc Series 90 - installed in thousands of plants - have no logging mechanism whatsoever. They do not record who changed a parameter, when, or from which IP address. Modbus and DNP3 protocols do not provide for slave-side logging.

Collision: The auditor requires “security event logs” on a controller that is technically incapable of logging anything. Replacing the controller costs hundreds of thousands, requires production downtime and process revalidation.

Compensating control: Instead of logging on the controller - passive network traffic monitoring (NDR) recording every command sent to the PLC. This fulfills the objective of SR 6.1 (we know who changed what and when) without touching the legacy controller. But this control requires a conscious architectural decision - it does not follow from any “minimum checklist.”
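The compensating control can be sketched as well: a passive sensor that parses captured Modbus TCP frames and emits an audit record for every write command, without touching the PLC. Illustrative only - the record fields are invented, and a real sensor would handle the full function-code set and malformed frames:

```python
import struct

# Single-write function codes with a fixed PDU layout (addr, value).
WRITE_FUNCTIONS = {0x05: "Write Single Coil", 0x06: "Write Single Register"}

def audit_record(src_ip, frame):
    """Build an audit entry from a captured Modbus TCP frame, or None."""
    if len(frame) < 12:
        return None
    unit_id = frame[6]       # MBAP header is 7 bytes; byte 6 is the unit id
    function = frame[7]      # first PDU byte is the function code
    if function not in WRITE_FUNCTIONS:
        return None          # reads are not logged in this sketch
    register, value = struct.unpack(">HH", frame[8:12])
    return {"src": src_ip, "unit": unit_id,
            "action": WRITE_FUNCTIONS[function],
            "register": register, "value": value}
```

The PLC never sees the sensor; the sensor sees every command the PLC receives. That asymmetry is what makes the control compensating rather than intrusive.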

What These Five Cases Demonstrate

In each case, the regulatory requirement is sensible as an objective - we want confidentiality (SR 4.1), authentication (SR 1.1), integrity (SR 3.4), malware protection (SR 3.2), auditability (SR 6.1). No one disputes that.

But literal implementation at the controller level is impossible - because the controller does not support it, the protocol does not provide for it, or the implementation itself degrades process safety (CrowdStrike BSOD, TLS latency on Safety systems).

The solution in every case lies one level up:

| Case | Requirement | Where the Solution Lies |
| --- | --- | --- |
| Poland 2025 | Encryption (SR 4.1) | Administrative process - edge device configuration review, default credential rotation |
| Oldsmar 2021 | MFA (SR 1.1 RE 2) | Process - change management, software inventory, segmentation |
| TRITON 2017 | Integrity (SR 3.4) | Zone architecture - physical SIS isolation, data diode |
| CrowdStrike 2024 | Anti-malware (SR 3.2) | Conscious control selection - whitelisting, NDR instead of endpoint agent |
| Legacy PLC | Logging (SR 6.1) | Compensating control - passive network traffic monitoring |

Four types of security controls - and why checklists focus on just one

Security controls fall into four categories. Most checklists and audits concentrate on the first - and ignore the other three:

| Control type | Purpose | OT examples | Does the checklist cover it? |
| --- | --- | --- | --- |
| Preventive | Stop incidents from happening | Encryption, MFA, segmentation, firewall, hardening | Yes - this is 90% of the checklist |
| Detective | Detect that something is happening | NDR, passive monitoring, log analysis, SIEM | Partially - if the controller can't log, the auditor doesn't ask |
| Corrective | Restore state after an incident | IR plan, PLC config backup, recovery procedures, tested recovery | Rarely - "do you have backups?" doesn't mean "have you tested recovery?" |
| Compensating | Replace a control that can't be implemented | NDR instead of PLC logging, physical isolation instead of encryption, whitelisting instead of EDR | Almost never - requires understanding context |

Looking at the five cases through this lens:

  • Poland 2025: Auditor demanded a preventive control (encryption). The real problem required an administrative control - configuration review and default credential rotation.
  • Oldsmar: Missing detective and corrective controls - TeamViewer active six months after being decommissioned, no software inventory.
  • TRITON: Missing preventive control at the right level - zone architecture, not an agent on the controller. Detective controls also failed - alarms were ignored.
  • CrowdStrike: A preventive control (EDR) itself became a source of failure. Missing compensating controls - whitelisting, NDR - that achieve the same objective without BSOD risk.
  • Legacy PLC: Detective control impossible on the controller. Solution: a compensating control - passive NDR monitoring network traffic without touching the device.

TIP

A lock on the door (preventive control) is useless if nobody locks the door (no administrative process), nobody checks whether the lock works (no detective control), and nobody knows what to do when someone forces it (no corrective control). Security is not a tool to buy - it is a set of conscious decisions spanning all four control types.

Common conclusion: searching for a “minimum set of tools” is asking only about preventive controls. Security requires all four types - and decisions about which controls are adequate for the context of each zone.

NIS2 - The Proportionality Principle That Auditors Forget

Article 21 of the NIS2 Directive requires essential and important entities to implement “appropriate and proportionate technical, operational and organisational measures.” The key word: proportionate.

When assessing the proportionality of measures, account must be taken of: the degree of the entity’s exposure to risks, the entity’s size, and the likelihood of occurrence of incidents and their severity, including their societal and economic impact.

This means:

  • NIS2 does not require TLS encryption on every Modbus connection in an industrial facility - it requires measures proportionate to the risk
  • NIS2 does not require MFA on a DCS operator HMI panel - it requires authentication proportionate to the context
  • NIS2 does not define specific technologies - it requires demonstrating that the organization consciously manages risk

Overinterpretation of NIS2 is as dangerous as underinterpretation. An auditor who demands controls without considering the OT context imposes requirements the directive does not set - and may effectively degrade security (e.g., by introducing latency in Safety systems).

Why We Must Stop Searching for the Minimum

Compliance as a Goal - A Pathology

Regulations (NIS2, IEC 62443) are essential - they enforce a minimum of attention to security. But treating them as a goal instead of a starting point leads to pathology:

  • Audit as theater - the organization invests in documentation and procedures that look good on paper but do not reflect operational reality
  • Optimizing for the minimum - “what is the smallest budget that will get us through the audit?” instead of “what budget is proportionate to our threats?”
  • False sense of security - “we passed the NIS2 audit, so we are secure” - while the IT/OT firewall still has an “allow any any” rule added “temporarily” 3 years ago

What to Do Instead

1. Start from risk assessment, not from a checklist

IEC 62443-3-2 requires risk assessment as step zero, before any controls. This is not a formality - it is the foundation of the security architecture. Without understanding threats specific to the environment, there is no way to determine which controls are appropriate.

2. Partition the system into zones and conduits

Only after risk assessment do you define zones and conduits. Each zone gets its own Security Level Target (SL-T). A Safety zone (SIL) will have different requirements than a DMZ zone, which in turn differs from the corporate zone. A “minimum list” does not account for these differences.

3. Select controls appropriate to the zone, not “universal” ones

SR 4.1 (encryption) on the IT-OT conduit - yes. SR 4.1 inside a Safety zone with a 3 ms latency requirement - no. Instead: physical isolation + passive monitoring + port-level access control.

SR 1.1 (authentication) on the OT jump server - yes, with MFA and full session logging. SR 1.1 on the DCS operator HMI panel with a 3-second response requirement - not in that form. Instead: role-based authentication at the SCADA application level + physical access control to the operator zone.

4. Document decisions, not just outcomes

It is not enough to write “network segmentation deployed.” Document why those specific zones, why that SL-T, why those controls and not others, and what residual risk you accept. This is the difference between paper armor and managed risk.
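A decision record can be as small as one structured entry per zone. A sketch - the field names are invented for illustration, not taken from any standard template:

```python
# Hypothetical per-zone security decision record: not just WHAT was deployed,
# but WHY, what was rejected, and what risk remains accepted.
decision = {
    "zone": "safety",
    "sl_target": 3,
    "rationale": "sophisticated attacker in threat model (TRITON-class)",
    "controls": ["physical isolation", "passive NDR", "port-level ACL"],
    "controls_rejected": {"TLS on GOOSE": "exceeds 3 ms latency budget"},
    "residual_risk": "insider with physical access to the relay cabinet",
    "review_due": "2026-12-01",
}
```

The `controls_rejected` entry is the part auditors never see in paper armor - and the part that proves risk was actually weighed rather than a checklist ticked.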

5. Validate continuously, not once

OT penetration testing (not Nessus scanning - manual tests accounting for industrial protocol specifics), incident response exercises, security architecture review after every environment change.

The OT Paradox: The More Stable the Environment, the Greater the Risk

In IT, systems are updated weekly. New versions, patches, configurations. IT demands constant attention because it constantly changes.

OT is different. The same PLCs for 10 years. The same SCADA for 15. The same network for 20. “Nothing changes here” - I hear this regularly. And it is precisely this stability that is the greatest source of risk in OT security.

Two Worlds, Two Adaptation Models

| | IT | OT |
| --- | --- | --- |
| Rate of system change | Continuous - weekly patches, quarterly upgrades | Minimal - same firmware for years |
| Threat adaptation | Automatic - EDR updates, SIEM rules change, patches close new CVEs | None - a 2015 controller does not know FrostyGoop exists |
| Rate of environmental change | Fast - but systems keep pace | Fast - but systems do not keep pace |
| Outcome | System ages alongside threats | System stands still, threats accelerate away |

In IT, the system and threats evolve in parallel. New exploit - new patch. New attack technique - new detection rule. It is an arms race, but at least both sides are running.

In OT, the system stands still. A 2015 PLC has exactly the same defenses today as on the day it was installed - which is to say, none. But the threat catalog grows every year:

  • 2015: BlackEnergy attacks the Ukrainian power grid - no one thought controllers could be targets
  • 2017: TRITON attacks Safety Instrumented Systems (SIS) - the “last line of defense” turns out to be vulnerable
  • 2022: Industroyer2 - second-generation malware targeting electrical substations
  • 2024: FrostyGoop - first malware using Modbus TCP for physical sabotage

The 2015 controller has not changed one bit. The world around it - fundamentally.

What Changes When “Nothing Changes”

Even if no one touches the OT system, everything around it changes:

Connection architecture - the “temporary” VPN for remote service added 3 years ago still runs. The energy monitoring system connected to the OT network “read-only.” The vendor cloud integration “for predictive maintenance.” Each new connection is a new attack vector that did not exist in the original risk assessment.

People - the engineer who knew the architecture and understood why the firewall has that specific rule has left. His successor does not touch the configuration because “it works.” Documentation either does not exist or has been outdated for years.

Regulations - NIS2 (2026) sets requirements that did not exist when the system was designed in 2015. The organization must demonstrate compliance with standards the system was never built to meet.

Attack techniques - MITRE ATT&CK for ICS documented 78 techniques in 2020. In 2026 - over 100. Each new technique is a potential vector that your 2015 controls may not address.

WARNING

An organization that deployed network segmentation in 2020 and has not verified it since probably has segmentation only on paper. How many “temporary exceptions” were added in the meantime? How many new VPN connections? How many firewall configuration changes “for a project”? Without regular review, every security architecture degrades.
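Even a crude review script catches the most common form of this degradation - the "temporary" any/any rule. A sketch under obvious assumptions: the rule format below is invented, and a real review would parse the export format of the actual firewall in use:

```python
def overly_permissive(rules):
    """Flag allow-rules with unrestricted source AND destination."""
    return [r for r in rules
            if r["action"] == "allow"
            and r["src"] == "any" and r["dst"] == "any"]

# Hypothetical ruleset as exported from the IT-OT boundary firewall.
rules = [
    {"src": "eng-wkst", "dst": "plc-net", "action": "allow"},
    {"src": "any", "dst": "any", "action": "allow"},  # added "temporarily" in 2022
]

for r in overly_permissive(rules):
    print("REVIEW:", r)
```

Ten lines of tooling do not replace an architecture review - but run quarterly, they turn "segmentation on paper" back into a verifiable claim.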

The PDCA Cycle - The Only Answer to the OT Paradox

IEC 62443 deliberately embeds a continuous improvement cycle (PDCA: Plan-Do-Check-Act). A one-time standard implementation is the beginning, not the end. This is the answer to the paradox: since OT systems do not adapt on their own, someone must adapt them.

| Phase | What It Means in OT Practice |
| --- | --- |
| Plan | Risk assessment reflecting current threats, not those from a document written 3 years ago |
| Do | Deploy controls appropriate to the current SL-T - including compensating controls for legacy systems |
| Check | Penetration tests, configuration review, IR exercises - verify that controls still work in the changed environment |
| Act | Correct: new threat = new control. Architecture change = new risk assessment. Key person departure = knowledge transfer and documentation update. |

Bruce Schneier wrote in 2000: “Security is a process, not a product.” In OT, that statement needs an addendum: OT security is a process of compensating for the fact that the system does not adapt on its own. Every day without a review is a day when the gap between the state of defenses and the state of threats widens.

Why Administrative Controls, Not Just Technical

A conclusion that seems counterintuitive to many engineers: since controllers do not support encryption, protocols do not provide authentication, and latency does not permit MFA - administrative controls become the foundation of OT security.

This is not an alternative to technology. It is a necessity arising from technology’s limitations:

  • Change management procedures - because there is no automatic system that will detect someone adding a new connection to the OT network
  • Regular risk assessments - because a PLC will not inform you that a new threat has emerged
  • Training and awareness building - because an operator who does not understand why connecting a USB drive to the engineering workstation is prohibited will bypass any technical control
  • Architecture and decision documentation - because in 3 years no one will remember why the firewall has that specific configuration
  • Incident response plan - tested, not sitting in a drawer - because in a system without logging and monitoring, the only chance of detecting an attack is a human noticing a process anomaly
  • Periodic reviews - because this is the only “adaptation” mechanism in an environment that technically does not adapt

TIP

Technology without process does not work - in any environment. A lock on the door is useless if nobody locks it. EDR without update management is CrowdStrike. Encryption without certificate management is an expired cert dropping a protection message at the substation. An organization that looks for a “minimum set of tools” without investing in processes, architecture and people is building paper armor.

Conclusion

OT security is not a set of tools to purchase - it is a process of continuously assessing adequacy. Every “simplification to the minimum” is an acceptance of risk you do not know about.

Standards provide a decision framework, but they require conscious application: risk assessment, zones and conduits, control selection appropriate to each zone’s context.

Organizations that understand this build real resilience. The rest build paper armor.
