OT Cybersecurity | 10 min read

Sabotage and Human Error - Underestimated Threats in OT Environments

Insider threats and human error in OT - incidents, statistics, and 12 organizational controls based on IEC 62443.

Michal Stepien
insider threat, human error, OT, IEC 62443, personnel security, sabotage

February 2000. On the outskirts of Sunshine Coast in Australia, residents begin complaining about a stench. Bicycle paths along Eudlo Creek are covered in foul-smelling sludge. Fish in the stream are dying. Raw sewage appears on the grounds of the Hyatt Regency hotel. For two months, no one can identify the cause - sewage pump failures happen regularly, after all.

It was not until April 2000 that it became clear that a single person was responsible for 46 incidents: Vitek Boden, a former employee of Hunter Watertech, the company that installed the SCADA system at the Maroochy Shire Council treatment plant. Boden, who had been denied employment by the council, drove around the area in a car filled with stolen radio equipment, issuing commands to pump stations. In total, he released 800,000 liters of raw sewage into parks, rivers, and hotel grounds.

Sentence: two years in prison. Lesson: the first documented cyberattack on critical infrastructure was the work of an insider.

Why the human factor dominates in OT

OT environments differ from typical IT environments in one key respect: operators have direct, physical access to processes that can cause real harm - chemical spills, power outages, destruction of equipment worth millions. Every mistake - whether deliberate sabotage or a simple configuration error - can have consequences that cannot be undone with a click of the “undo” button.

The Verizon DBIR 2025 report indicates that the human element accounts for 60% of security breaches. In the EMEA region, 29% of breaches originate from inside the organization - six times more than in North America (5%) and nearly thirty times more than in the APAC region (1%).

In the OT context, the problem is even more serious. The SANS ICS/OT 2025 survey (330 respondents) shows that 21% of organizations experienced an OT incident in the past year, and 40% of them reported operational disruptions. Over 20% needed more than a month to restore normal operations.

60%

of breaches involve the human element (Verizon DBIR 2025)

29%

of EMEA breaches originate from insiders

81 days

average time to detect an insider threat (Ponemon 2025)

$17.4M

annual cost of insider incidents per organization

Sources: Verizon DBIR 2025, Ponemon Cost of Insider Risks 2025

Three faces of insider threat

Insider threats are not limited to malicious saboteurs. The Ponemon Institute’s Cost of Insider Risks 2025 report classifies them into three categories:

1. Negligent insider

Most commonly: an operator who skips a procedure, uses a shared password, or connects an unverified USB device to an engineering workstation. This is not malice - it is routine, fatigue, or lack of awareness.

Example - Davis-Besse, 2003: The SQL Slammer worm entered the Davis-Besse nuclear power plant network in Ohio through an unsecured contractor T1 connection, bypassing the firewall. For nearly 5 hours, the Safety Parameter Display System (SPDS) was unavailable. Operators had no visibility into critical safety data. The cause? A Microsoft patch that had been available for six months was never applied, and the contractor’s network connection was unknown to the security team.

2. Malicious insider

Deliberate action: sabotage, data theft, process manipulation. Motivation: revenge after termination, financial gain, pressure from third parties.

Example - Tesla, 2018: Employee Martin Tripp modified the Tesla Manufacturing Operating System (MOS) code, exporting gigabytes of confidential data, including photographs and video from production lines. He made changes under fake user accounts.

Example - Tesla, 2020: Russian citizen Egor Kriuchkov offered a Tesla factory employee in Nevada one million dollars to install ransomware on the plant’s network. The employee reported the attempt to the FBI, and Kriuchkov was arrested. This case demonstrates that the boundary between an insider and an external attack is often blurred.

3. Compromised account (compromised insider)

An attacker takes over an employee’s login credentials and acts on their behalf. From the perspective of monitoring systems, this is a “normal” user performing “normal” operations - making detection exceptionally difficult.

Example - Oldsmar, 2021: Someone remotely logged in via TeamViewer to a water treatment station in Florida and changed the sodium hydroxide concentration from 100 to 11,100 ppm. Shared passwords, no MFA, and an outdated Windows 7 system made this incident possible. We discuss this further in our analysis of the Oldsmar attack.
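One technical lesson from Oldsmar is that setpoint changes should be validated server-side against engineered limits, so that even a successful login cannot push the process outside its safe band. A minimal Python sketch of that idea (the tag name, bounds, and `apply_setpoint` function are illustrative assumptions, not any real HMI or SCADA API):

```python
# Hypothetical sketch: server-side plausibility check on operator setpoints.
# SAFE_LIMITS and apply_setpoint are illustrative names, not a real HMI API.

SAFE_LIMITS = {
    # tag name: (low, high) in engineering units - example bounds for lye dosing
    "naoh_dosing_ppm": (50.0, 200.0),
}

def apply_setpoint(tag: str, value: float) -> bool:
    """Accept a setpoint only if it lies within the engineered safe band."""
    low, high = SAFE_LIMITS[tag]
    if not (low <= value <= high):
        # Reject and log instead of passing the value to the process
        print(f"REJECTED {tag}={value} (allowed {low}-{high})")
        return False
    print(f"APPLIED {tag}={value}")
    return True

apply_setpoint("naoh_dosing_ppm", 100)    # a normal value passes
apply_setpoint("naoh_dosing_ppm", 11100)  # the Oldsmar-scale value is rejected
```

A check like this does not replace MFA or individual accounts, but it turns a compromised session into a logged, rejected command rather than a poisoned process.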

The Great Blackout of 2003 - when one error stops a continent

On August 14, 2003, shortly after 4:10 PM Eastern Time, 55 million people in eight US states and the province of Ontario in Canada lost power. The blackout lasted up to two days, caused estimated losses of $6 billion, and contributed to 11 deaths.

The cause was not a cyberattack in the classic sense - it was a sequence of human and technical errors:

  1. Neglected infrastructure - FirstEnergy’s high-voltage lines touched overgrown trees in a transmission right-of-way that no one had trimmed
  2. Software bug - a race condition bug in the GE XA/21 energy management system disabled alarms at the FirstEnergy control center for over an hour
  3. Lack of situational awareness - operators did not know the grid was overloaded and failed to redistribute the load in time
  4. Cascading failures - without intervention, a local failure spread across the entire northeastern grid

WARNING

The 2003 blackout did not require a hacker. All it took was neglected maintenance, an untested alarm, and operators who lacked a complete picture of the situation. In OT, a lack of procedures and awareness can have effects comparable to a deliberate attack.

IEC 62443-2-1 - requirements for the human factor

The IEC 62443-2-1 standard defines requirements for a cybersecurity management system (CSMS) in industrial automation environments. The human factor runs throughout the entire document, but key requirements are concentrated in several areas:

IEC 62443-2-1 Area | Requirement | Objective
Awareness and training | Periodic cybersecurity training for all OT personnel | Reducing errors from ignorance
Personnel access control | Assigning privileges based on the principle of least privilege | Limiting the scope of a potential incident
Account management | Individual accounts, prohibition of password sharing, access reviews | Accountability and activity tracking
Personnel vetting | Background checks for employees with access to critical systems | Prevention of insider threats
Change management | Formal procedure for approving changes to system configurations | Preventing unauthorized modifications
Incident response | Response plan covering insider threat scenarios | Rapid identification and isolation of the problem

More on the practical application of IEC 62443 in the context of zones and conduits is described in our article on Defense in Depth in DCS systems.

TIP

IEC 62443-2-1 requires that cybersecurity training be tailored to the employee’s role. A DCS operator needs different training than an OT network administrator or a maintenance manager. Generic “IT training” does not meet this requirement.

12 organizational controls - implementation checklist

Based on the requirements of IEC 62443-2-1, CISA insider threat guidelines, and practices described by the Ponemon Institute, below are 12 controls that help mitigate the risk from the human factor in OT:

People

  • Background vetting - screening candidates before granting access to critical systems
  • Role-tailored training - operators, engineers, contractors - each requires a different program
  • Incident reporting culture - an employee who reports an error must not be penalized (Verizon DBIR 2025 shows that reporting rates increase 4x after appropriate training)
  • Offboarding procedure - immediate revocation of access for departing employees (the Boden case at Maroochy Shire shows what happens when this is missing)
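The offboarding bullet above is, at its core, a reconciliation problem: accounts that exist in OT systems but have no matching active employee must be revoked. A minimal sketch of that check (the data sets and account names are illustrative assumptions; in practice they would come from an HR export and the OT account inventory):

```python
# Hypothetical sketch: find OT accounts that should have been revoked at offboarding.
# All three sets are illustrative stand-ins for an HR export and account inventory.

active_ot_accounts = {"jsmith", "vboden", "akowalska", "scada_eng01"}
current_employees = {"jsmith", "akowalska"}
approved_service_accounts = {"scada_eng01"}  # non-personal accounts reviewed separately

stale = active_ot_accounts - current_employees - approved_service_accounts
for account in sorted(stale):
    print(f"REVOKE: {account} has OT access but no active employment record")
```

Run periodically, a diff like this catches exactly the gap that let Boden keep issuing radio commands months after leaving Hunter Watertech.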

Processes

  • Four-eyes principle - critical changes to OT configuration require approval from two people
  • Vendor management - control of contractor remote access, connection logs, time restrictions (Davis-Besse: an uncontrolled contractor T1 connection bypassed the firewall)
  • Access reviews - quarterly verification of who has access to what; removal of stale accounts
  • Insider threat response plan - a separate playbook for incidents involving employees, covering legal and HR aspects
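The four-eyes principle from the list above can be enforced in software rather than left to discipline: a change is applied only after two distinct authorized people approve it. A minimal sketch (the approver names and `ChangeRequest` class are illustrative assumptions, not a real change-management tool):

```python
# Hypothetical sketch of the four-eyes principle for OT configuration changes:
# a change may be applied only after two *different* authorized people approve it.

from dataclasses import dataclass, field

APPROVERS = {"lead_engineer", "shift_supervisor", "ot_admin"}  # illustrative roles

@dataclass
class ChangeRequest:
    description: str
    approvals: set = field(default_factory=set)

    def approve(self, user: str) -> None:
        if user in APPROVERS:  # silently ignore unauthorized approvers
            self.approvals.add(user)

    def can_apply(self) -> bool:
        # A set deduplicates, so the same person approving twice still counts once
        return len(self.approvals) >= 2

cr = ChangeRequest("Raise pump P-101 speed limit")
cr.approve("lead_engineer")
cr.approve("lead_engineer")      # same person twice does not satisfy four eyes
assert not cr.can_apply()
cr.approve("shift_supervisor")   # a second, distinct approver unlocks the change
assert cr.can_apply()
```

Storing approvals as a set of identities, not a counter, is the design choice that makes the control meaningful: one insider cannot satisfy it alone.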

Technology

  • Individual accounts - no more shared passwords on HMI stations (Oldsmar: one TeamViewer login for the entire team)
  • Privileged activity monitoring - session recording on engineering workstations and operator consoles
  • Segmentation and remote access control - limiting lateral movement even after perimeter breach. More on this in our ICS remote access guide
  • Behavioral alerts - detecting anomalies in user behavior: logins at unusual times, access to systems outside scope of duties, mass data downloads
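The simplest behavioral alert from the last bullet - a login at an unusual time - needs nothing more than a per-user baseline of working hours. A minimal sketch (the `TYPICAL_HOURS` baseline is an illustrative assumption; production systems derive it statistically per user rather than hard-coding it):

```python
# Hypothetical sketch of a basic behavioral alert: flag HMI logins outside a
# user's typical working window. TYPICAL_HOURS is an illustrative static baseline.

from datetime import datetime

TYPICAL_HOURS = {
    # user: (start_hour, end_hour) of the expected login window, 24h clock
    "operator_a": (6, 18),
}

def is_anomalous_login(user: str, ts: datetime) -> bool:
    """Return True when the login falls outside the user's expected window."""
    start, end = TYPICAL_HOURS.get(user, (0, 24))  # unknown users: no window, no alert
    return not (start <= ts.hour < end)

print(is_anomalous_login("operator_a", datetime(2025, 3, 1, 3, 0)))   # 3 AM login
print(is_anomalous_login("operator_a", datetime(2025, 3, 1, 10, 0)))  # mid-shift login
```

Even a crude rule like this would have surfaced Boden's night-time radio sessions and the out-of-hours TeamViewer login at Oldsmar; real deployments extend the same idea to access scope and data-volume baselines.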

The cost of ignoring the problem

The Ponemon Institute’s 2025 report estimates that the average annual cost of insider incidents is $17.4 million per organization - up from $16.2 million in 2023. The average cost of a single incident is $676,517, and for a malicious insider - $715,366.

Importantly, companies spend an average of $211,021 on containing an incident, but only $37,756 on monitoring. This disparity - a 5.6x difference - shows that most organizations react instead of prevent.

Time also has a price: insider incidents contained within 31 days cost an average of $10.6 million. Those that drag on beyond 91 days - $18.7 million.

What to do - a practical approach

Insider threats in OT do not require implementing revolutionary solutions. They require consistency in three areas that organizations often neglect:

  1. Awareness - people need to know what can go wrong and what warning signs look like. This is not about fear - it is about building competence.

  2. Processes - procedures must exist, be current, and be followed. An offboarding procedure that no one executes does not protect against the next Boden.

  3. Technology - monitoring, segmentation, access control - these are tools that help detect and limit the impact of incidents. But they are not sufficient on their own if people bypass them.

SEQRED helps organizations build OT security programs covering all three dimensions - from IEC 62443 audits to testing organizational controls through red teaming.

