You cannot manage risk you cannot measure, and you cannot measure what you cannot see. Most OT security programs I walk into are still making decisions on incomplete asset inventories, disconnected compliance tools, and vulnerability lists that treat every finding the same, no matter what that device actually does for the business. The teams that are getting this right start with visibility and then work up to risk quantification in a language their board can actually use.

In this week’s feature, I am unpacking what stood out from my conversation with Nicholas Friedman in Episode 101: OT Risk Management That Works - Asset Visibility, Risk Quantification & CISO-Level Strategy, and layering in what I keep seeing at power plants, manufacturing sites, and control rooms, where the “inventory” on paper and the reality in the field almost never match.
Why Asset Visibility Is Where I Always Start
I have lost count of how many times I have asked for an asset inventory and been handed a spreadsheet that looks impressive until you try to use it.
Hostname. IP address. MAC address. Firmware version. Maybe an OS field if things are really mature.
What is almost always missing is the one thing that matters most for risk: what that asset actually does and how bad it is if it fails.
I have seen plants where operators can walk you through every panel and PLC from memory, but none of that knowledge shows up in the official inventory. On paper, a turbine controller and a break room ice machine controller look identical. Same vendor, same firmware, same CVEs. In real life, they are absolutely not the same level of risk.
That gap between what the spreadsheet says and what the humans know is where risk hides.
Most Teams Are Far Less Visible Than They Think
When I ask leaders how confident they are in their OT asset visibility, the most common answer I hear is something like, “We are probably at 50 percent. We know we are not perfect, but we have a decent handle on things.”
In Episode 101, Nick told a story about a large utility that gave that same answer. They guessed they were at about 50 percent visibility.
Once the full assessment and sensor deployment were done, the real number was 18 percent.
After the hard work, they pushed above 95 percent, but the wake-up call for the board was that initial gap between perception and reality. They thought they had half the environment covered. In reality, more than four out of five assets were effectively invisible from a risk perspective.
From what I see in the field, that utility is not the exception. It is the rule.
If you had to put a number on your own OT visibility right now, would you be comfortable betting your job on it?
Vulnerability Counts Are Not Risk
Security teams like numbers because numbers feel objective.
How many vulnerabilities did we find?
How many critical findings are still open?
How many endpoints are missing a patch?
The problem is that none of those numbers answer the only question boards actually care about: Are we materially safer this quarter than we were last quarter?
You cannot answer that with raw counts. You answer it with context.
When I talk to boards, I rarely mention specific CVEs. I talk about:
· The percentage of critical OT assets where we have full, accurate visibility.
· The exposure of those assets based on how reachable they are, what they do, and what could realistically happen if they fail.
· The trend line on that exposure over time.
Nick’s background in banking, aerospace, and utilities lines up with what I have seen across industries: risk is business risk first. The technology changes, but the math is the same. If your asset data cannot tell you what a device does for the business, you are not doing risk quantification. You are just sorting a vulnerability list.
Compliance-Driven OT Security Is Quietly Becoming a Liability
I have a lot of respect for what frameworks like NERC CIP have done for the industry. Without them, many organizations would never have started investing in OT security at all.
But there is a pattern I keep seeing that worries me.
The more mature an organization becomes in its compliance program, the more likely it is that leadership starts to equate passing the audit with having risk under control.
Here is what that looks like on the ground:
· Compliance scope covers only a slice of the OT footprint, but that slice gets all the attention and budget.
· Separate tools and workflows exist for compliance that nobody touches outside audit season.
· Boards get reports full of control statuses and deficiency counts, but almost nothing about actual operational risk.
Nick and I both see this as the industry version of cargo cult security. Lots of activity, lots of documentation, not a lot of reduction in real-world risk.
Compliance should be a by-product of a good risk program, not the other way around.
From Vulnerability Management to Exposure Management
Almost every OT program I touch is drowning in vulnerability data.
Tens of thousands, sometimes millions, of findings. Different scanners. Different scoring systems. Different owners. Everybody knows you cannot patch your way through that mountain in a straight line.
The question that actually helps you move forward is not “How many vulnerabilities do we have?”
The question is “Where are we most exposed right now?”
To answer that, you have to combine:
· Asset criticality - what the device does for the business.
· Reachability - how easy it is to get to from the outside or from other compromised systems.
· Compensating controls - segmentation, monitoring, and safety systems that will or will not catch bad things early.
· Threat activity - what is actually being exploited in the wild.
When you put those pieces together, you can start to assign exposure scores that mean something. That is when patching, compensating controls, and project prioritization actually become strategic instead of reactive.
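To make that combination concrete, here is a minimal sketch of how those four factors might roll up into a single exposure score. This is my own illustration, not Nick's model or any vendor's formula; every field name, weight, and threshold here is a hypothetical assumption.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int      # 1 (low) to 3 (high): what it does for the business
    reachability: float   # 0.0 (isolated) to 1.0 (directly reachable)
    controls: float       # 0.0 (none) to 1.0 (strong segmentation/monitoring)
    threat_active: bool   # relevant vulnerability exploited in the wild

def exposure_score(a: Asset) -> float:
    """Hypothetical scoring: business criticality scaled by reachability,
    discounted by compensating controls, boosted when the threat is
    actively being exploited. The weights are illustrative only."""
    base = a.criticality * a.reachability
    mitigated = base * (1.0 - 0.5 * a.controls)
    return mitigated * (1.5 if a.threat_active else 1.0)

# Two assets with the same vendor and the same CVEs, very different risk.
assets = [
    Asset("turbine-controller", criticality=3, reachability=0.5,
          controls=0.5, threat_active=True),
    Asset("icemaker-controller", criticality=1, reachability=0.9,
          controls=0.0, threat_active=True),
]

for a in sorted(assets, key=exposure_score, reverse=True):
    print(f"{a.name}: exposure {exposure_score(a):.2f}")
```

Even this toy version makes the article's point: the turbine controller outranks the ice machine controller despite identical findings, because criticality drives the score instead of CVE counts.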
One thing I appreciated in my discussion with Nick is how aligned we were on this shift. He comes at it from years of risk and GRC work; I come at it from years in plants and OT networks. Both paths lead to the same conclusion: if you are still managing vuln counts instead of exposure, you are going to lose the race.
The Part Nobody Is Ready For: Institutional Knowledge Is Walking Out the Door
Let me get personal for a second.
My father spent more than 40 years in the power utility world. He knows the plants, the cables, and the control rooms in a way that never shows up in a Visio diagram.
Even in retirement, companies still call him. Why?
Because when something strange happens, they want the person who remembers why that relay was wired the way it is, or why that panel label does not match the documentation.
I see that same pattern everywhere I go. Senior operators and engineers who have kept systems running for decades are retiring, and most organizations do not realize just how much invisible risk that creates.
Here is the uncomfortable truth:
If your asset inventory and risk model do not capture what those people know, you are building your program on a foundation you cannot see.
You might have beautiful spreadsheets, a fancy dashboard, and a thick risk register, but if they do not reflect how the plant actually works, you are managing an imaginary environment.
What I Tell CISOs and OT Leaders To Do First
If you are a CISO, OT security lead, or plant manager, here is where I recommend starting:
1. Put a real number on asset visibility.
Do an honest assessment. Use discovery tools, but also walk the floor with operators. Do not stop at “we think it is 50 percent.” Measure it.
2. Add business context to your inventory.
For each key asset, document what it does, who owns it, and what happens if it fails. Even a one-sentence description is better than nothing.
3. Define simple criticality tiers that everyone understands.
Low, medium, high is fine to start. The important part is consistency, not perfection.
4. Link vulnerabilities and controls to that context.
Do not treat every finding as equal. Focus on exposure: critical asset, easy path, no compensating controls, active threat.
5. Report to the board in business language.
Talk about reduction in exposure on critical processes, not just patch counts. Show where you are investing and how that maps to fewer and smaller bad days.
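To show how steps 3 through 5 fit together, here is a hedged sketch of turning a flat finding list into one board-ready sentence. All asset names, tier labels, and the high-exposure rule are invented for illustration:

```python
# Hypothetical data: each finding carries its asset's criticality tier
# (step 3) plus the context from step 4.
findings = [
    # (asset, criticality tier, reachable path?, compensating controls?)
    ("feedwater-plc",   "high",   True,  False),
    ("hmi-workstation", "medium", True,  True),
    ("lab-printer",     "low",    True,  False),
    ("turbine-ctrl",    "high",   False, True),
]

def high_exposure(asset, tier, reachable, controls):
    # Step 4: a finding matters most on a critical asset with an easy
    # path in and no compensating controls.
    return tier == "high" and reachable and not controls

critical_total = sum(1 for f in findings if f[1] == "high")
critical_exposed = sum(1 for f in findings if high_exposure(*f))

# Step 5: one business-language metric, not a patch count.
print(f"{critical_exposed} of {critical_total} critical assets "
      f"currently carry a high-exposure finding.")
```

Tracked quarter over quarter, that single ratio answers the board's real question far better than a raw count of open vulnerabilities.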
How I Frame This at the Board Level
When I am in a boardroom, I usually boil everything down to three questions:
1. How confident are we in our visibility of the assets that actually generate revenue or keep people safe?
2. When we cannot fix everything, how do we decide what to fix first?
3. How do we know whether we are materially safer this quarter than we were last quarter?
If you cannot answer those three questions with data your board can understand, you do not have an OT risk management program yet. You have a collection of tools and spreadsheets.
The good news is that this is fixable. It starts with visibility, adds business context, then builds toward continuous risk quantification that matches how your organization actually makes decisions.
Want the Full Conversation?
If you want to hear the full back-and-forth with Nicholas Friedman, including stories from utilities and other critical infrastructure operators, listen to Episode 101: OT Risk Management That Works - Asset Visibility, Risk Quantification & CISO-Level Strategy.
In the meantime, here is the question I will leave you with:
If you had to give your board a single percentage for OT asset visibility tomorrow morning, what number would you be willing to say out loud?

