STEM to SecOps: How Evidence-Driven Security Improves Patient & Public Safety (EU Focus)
Author: Marçal Santos (CISM, CDPSE)
Date: November 4, 2025
TL;DR
Bringing a STEM mindset—clear hypotheses, measurement, repeatable experiments, and peer review—into security operations (SecOps) turns cyber risk into a patient-safety and public-safety program you can defend to boards and regulators under NIS2. Below are EU-grounded examples and seven STEM-style metrics to run every week.
Why STEM thinking belongs in security
STEM disciplines value evidence over opinion. Applied to SecOps, that means:
Formulate a hypothesis (“If EHR storage fails, urgent care continues in read-only mode within 15 minutes.”)
Instrument the system (time-stamped logs, playbook step evidence, clinical throughput counters).
Run the experiment (tabletop or live drill), then peer-review and iterate.
This approach replaces “we think we’re ready” with defensible proof—vital when healthcare, public services, or critical infrastructure are on the line.
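As a minimal sketch of that loop, the hypothesis above can be expressed as a pass/fail check over time-stamped drill evidence. The field names and the 15-minute threshold are illustrative assumptions, not taken from any specific tooling:

```python
from datetime import datetime, timedelta

# Hypothetical drill evidence: timestamps captured during an EHR read-only exercise.
drill = {
    "storage_failure_declared": datetime(2025, 11, 4, 9, 0),
    "read_only_mode_confirmed": datetime(2025, 11, 4, 9, 12),
}

# Hypothesis: read-only mode is reached within 15 minutes of the declared failure.
elapsed = drill["read_only_mode_confirmed"] - drill["storage_failure_declared"]
hypothesis_holds = elapsed <= timedelta(minutes=15)

print(f"Time to read-only: {elapsed}, hypothesis holds: {hypothesis_holds}")
```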
EU cases: from headlines to lessons learned
UK NHS WannaCry (2017): large-scale appointment cancellations showed how availability failures become clinical risks.
HSE Ireland (2021): a whole-of-system response highlighted the need for isolation at scale, supplier governance, and evidence-backed runbooks.
Netherlands Cervical Screening Data: Our recent analyses break down what happened and the operational takeaways for identity, data handling, and incident communications:
For board reporting mechanics and people/skills implications, you may also find these helpful:
NIS2 and the culture of evidence
NIS2 expects risk-based measures and timely, staged incident reporting. Practically, that means your IR tooling and playbooks should produce auditable timelines (awareness → early warning → 72-hour notification → 1-month report), supplier artifacts, and decision logs. A STEM-style SecOps program makes this routine, not heroic.
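One way to keep that timeline auditable is to capture each incident as a single evidence record that carries the staged-reporting timestamps and the decision log together. A rough sketch follows; the field names are an assumption for illustration, not a prescribed NIS2 schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative evidence record for one incident.
@dataclass
class IncidentEvidence:
    incident_id: str
    aware_at: datetime                            # when the organisation became aware
    early_warning_at: Optional[datetime] = None   # due within 24 hours of awareness
    notification_at: Optional[datetime] = None    # due within 72 hours of awareness
    final_report_at: Optional[datetime] = None    # due within one month of awareness
    decisions: list = field(default_factory=list) # time-stamped decision log entries

evidence = IncidentEvidence(incident_id="INC-2025-042", aware_at=datetime(2025, 11, 4, 8, 30))
evidence.decisions.append("2025-11-04T09:05Z isolated imaging VLAN (approved by CISO)")
print(evidence)
```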
7 STEM-style metrics every EU healthcare/public org should run
Each metric includes What it proves, How to measure, and What to show the board/regulator.
MTTR (Mean Time to Recover) for Tier-1 services
Proves: You can restore clinical/critical services quickly.
Measure: Timestamp incident start → service SLO restored; track by service (EHR, imaging, lab, e-prescribing, emergency dispatch).
Show: Median MTTR, worst case, trend vs last quarter.
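A minimal sketch of the calculation, assuming you export incident start and SLO-restored timestamps per Tier-1 service (the sample records are made up):

```python
from datetime import datetime
from statistics import median

# Hypothetical incident records: (service, incident_start, slo_restored).
incidents = [
    ("EHR",     datetime(2025, 9, 2, 10, 0),  datetime(2025, 9, 2, 13, 30)),
    ("EHR",     datetime(2025, 10, 14, 1, 0), datetime(2025, 10, 14, 2, 10)),
    ("Imaging", datetime(2025, 10, 20, 8, 0), datetime(2025, 10, 20, 16, 0)),
]

# Group recovery times (in hours) by Tier-1 service.
by_service = {}
for service, start, restored in incidents:
    by_service.setdefault(service, []).append((restored - start).total_seconds() / 3600)

# Median and worst case per service, for the board pack.
for service, hours in by_service.items():
    print(f"{service}: median MTTR {median(hours):.1f}h, worst case {max(hours):.1f}h")
```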
Restore Success Rate (Backups → Validated Restores)
Proves: Backups actually work.
Measure: Quarterly live restores for each Tier-1, with data integrity checks.
Show: % successful, oldest last-successful date per system, RTO/RPO variance.
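A small sketch of how the numbers roll up, assuming a restore-test log where each row is one quarterly live restore (systems and dates are invented):

```python
from datetime import date

# Hypothetical restore-test log for Tier-1 systems.
restore_tests = [
    {"system": "EHR",  "date": date(2025, 9, 15), "restored": True,  "integrity_ok": True},
    {"system": "PACS", "date": date(2025, 8, 3),  "restored": True,  "integrity_ok": False},
    {"system": "LIS",  "date": date(2025, 4, 20), "restored": False, "integrity_ok": False},
]

# A restore only counts as successful if the data integrity check also passed.
successes = [t for t in restore_tests if t["restored"] and t["integrity_ok"]]
success_rate = 100 * len(successes) / len(restore_tests)

# Last successful restore date per system flags stale evidence.
last_success = {}
for t in successes:
    prev = last_success.get(t["system"])
    if prev is None or t["date"] > prev:
        last_success[t["system"]] = t["date"]

print(f"Restore success rate: {success_rate:.0f}%")
print("Last successful restore per system:", last_success)
```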
KEV/Critical Vulnerability MTTR
Proves: You close the highest-risk exposures fast.
Measure: Age from “first seen” to “fully mitigated”; separate patched vs compensating controls (e.g., segmentation or patchless/runtime protection for legacy modalities).
Show: % closed within policy (e.g., KEV ≤7 days), median days, legacy assets under compensating control.
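A sketch of the age and policy calculation, assuming a vulnerability export with first-seen and mitigation dates plus the mitigation method (records and the 7-day policy are illustrative):

```python
from datetime import date
from statistics import median

# Hypothetical vulnerability records; "kev" marks CISA Known Exploited Vulnerabilities.
vulns = [
    {"cve": "CVE-2025-0001", "kev": True,  "first_seen": date(2025, 10, 1), "mitigated": date(2025, 10, 5),  "method": "patched"},
    {"cve": "CVE-2025-0002", "kev": True,  "first_seen": date(2025, 10, 3), "mitigated": date(2025, 10, 20), "method": "segmentation"},
    {"cve": "CVE-2025-0003", "kev": False, "first_seen": date(2025, 9, 12), "mitigated": date(2025, 9, 30),  "method": "patched"},
]

POLICY_DAYS_KEV = 7  # example policy: KEV closed within 7 days

kev = [v for v in vulns if v["kev"]]
ages = [(v["mitigated"] - v["first_seen"]).days for v in kev]
within_policy = sum(1 for a in ages if a <= POLICY_DAYS_KEV)
compensating = [v["cve"] for v in kev if v["method"] != "patched"]

print(f"KEV within {POLICY_DAYS_KEV}-day policy: {100 * within_policy / len(kev):.0f}%")
print(f"Median KEV age: {median(ages)} days; under compensating control: {compensating}")
```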
Supplier IR SLA Coverage (Critical Vendors)
Proves: You can act beyond your four walls.
Measure: % of critical suppliers with 1-hour notify + hourly status for Sev-1; joint tabletops within last 12 months.
Show: Coverage %, last tabletop date, examples of vendor evidence packets.
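A minimal coverage calculation, assuming a critical-supplier register that records the contractual Sev-1 terms and the last joint tabletop (entries are fictional):

```python
from datetime import date, timedelta

# Hypothetical critical-supplier register.
suppliers = [
    {"name": "Vendor A", "notify_1h": True,  "hourly_status": True,  "last_tabletop": date(2025, 6, 10)},
    {"name": "Vendor B", "notify_1h": True,  "hourly_status": False, "last_tabletop": date(2024, 2, 1)},
    {"name": "Vendor C", "notify_1h": False, "hourly_status": False, "last_tabletop": None},
]

today = date(2025, 11, 4)

# Covered = 1-hour notify + hourly status + joint tabletop within the last 12 months.
covered = [
    s for s in suppliers
    if s["notify_1h"] and s["hourly_status"]
    and s["last_tabletop"] and today - s["last_tabletop"] <= timedelta(days=365)
]

print(f"Supplier IR SLA coverage: {100 * len(covered) / len(suppliers):.0f}%")
print("Covered suppliers:", [s["name"] for s in covered])
```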
Clinical/Public Service Continuity Index
Proves: Cyber resilience protects outcomes (not just servers).
Measure: During incidents/drills, correlate SOC timeline with operational throughput (e.g., % procedures done on time, diversion counts).
Show: On-time %, average delay, mitigations that improved outcomes.
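A toy example of the correlation step, assuming per-hour throughput counts captured while the incident timeline ran (numbers are invented and the index here is simply the on-time percentage):

```python
# Hypothetical per-hour drill data: planned vs on-time urgent procedures during the incident window.
hours = [
    {"hour": "10:00", "planned": 12, "on_time": 11, "diversions": 0},
    {"hour": "11:00", "planned": 14, "on_time": 9,  "diversions": 1},
    {"hour": "12:00", "planned": 13, "on_time": 12, "diversions": 0},
]

planned = sum(h["planned"] for h in hours)
on_time = sum(h["on_time"] for h in hours)
diversions = sum(h["diversions"] for h in hours)

print(f"On-time procedures: {100 * on_time / planned:.0f}% ({on_time}/{planned}), diversions: {diversions}")
```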
Identity Assurance & Access Hygiene Score
Proves: Strong identity reduces blast radius.
Measure: % privileged sessions brokered/recorded; % of workforce on MFA/passkeys; stale accounts >30 days disabled; vendor access via JIT.
Show: Scores by role/vendor; month-over-month improvements.
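One way to turn those four measures into a single trendable number is a weighted composite. The weights and field names below are assumptions for illustration, not a standard scoring model:

```python
# Hypothetical identity telemetry, each expressed as a percentage.
identity = {
    "privileged_sessions_brokered_pct": 92,  # % privileged sessions brokered/recorded
    "workforce_mfa_pct": 88,                 # % of workforce on MFA/passkeys
    "stale_accounts_disabled_pct": 97,       # % of >30-day stale accounts disabled
    "vendor_access_jit_pct": 60,             # % of vendor access via just-in-time grants
}

# Example weights; tune to your own risk appetite.
weights = {
    "privileged_sessions_brokered_pct": 0.3,
    "workforce_mfa_pct": 0.3,
    "stale_accounts_disabled_pct": 0.2,
    "vendor_access_jit_pct": 0.2,
}

score = sum(identity[k] * w for k, w in weights.items())
print(f"Identity assurance & access hygiene score: {score:.1f}/100")
```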
NIS2 Reporting Timeliness
Proves: Governance and operational discipline.
Measure: Awareness time → early warning (≤24h) → 72h notification → final report (≤1 month), with receipts attached.
Show: % on-time at each stage; median hours; gaps and fixes.
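A small sketch of the on-time check for one incident, measuring each stage from the moment of awareness (timestamps are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical reporting timeline for one incident.
aware = datetime(2025, 10, 1, 9, 0)
timeline = {
    "early_warning": datetime(2025, 10, 1, 20, 0),   # due within 24 hours
    "notification":  datetime(2025, 10, 4, 7, 0),    # due within 72 hours
    "final_report":  datetime(2025, 10, 29, 12, 0),  # due within one month
}
deadlines = {
    "early_warning": timedelta(hours=24),
    "notification":  timedelta(hours=72),
    "final_report":  timedelta(days=30),
}

for stage, submitted in timeline.items():
    elapsed = submitted - aware
    on_time = elapsed <= deadlines[stage]
    print(f"{stage}: {elapsed.total_seconds() / 3600:.0f}h after awareness, on time: {on_time}")
```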
Mini-playbooks: fast “experiments” to run this month
EHR Read-Only Drill (2 hours): Prove triage/admissions continue while primary DB is degraded. Track MTTR to read-only and clinical throughput.
Imaging Downtime Procedure (tabletop + sample): Prove urgent films are read within SLO when PACS is offline. Count deferrals; time comms.
Supplier Black-Hole (15-minute chaos test): Simulate a critical vendor outage. Measure time to publish status-page and IVR messages; capture decisions and evidence.
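To capture evidence during any of these drills, a simple time-stamped stopwatch is enough; the milestone names below are illustrative for the supplier black-hole test:

```python
from datetime import datetime

# Minimal drill stopwatch: call mark() at each milestone as it happens.
marks = {}

def mark(milestone: str) -> None:
    marks[milestone] = datetime.now()
    print(f"{marks[milestone].isoformat(timespec='seconds')}  {milestone}")

mark("vendor outage simulated")
mark("status page message published")
mark("IVR message updated")

elapsed = marks["status page message published"] - marks["vendor outage simulated"]
print(f"Time to public status message: {elapsed}")
```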
Implementation roadmap (30/60/90)
30 days: Instrument the 7 metrics; run one live restore; schedule a joint tabletop with your top vendor.
60 days: Reduce KEV/Critical MTTR by X%; migrate vendor access to JIT; eliminate stale accounts.
90 days: Publish your first NIS2 evidence pack with timelines, receipts, and outcomes tied to service continuity.
Further reading from Trescudo
Download the Trescudo Guide to the NIS2 Directive:
https://trescudo.com/assets/guides/the-trescudo-guide-to-the-nis2-directive.pdf