Key risk indicators of operational risk (KRIs): What to track and what to ignore

Your operational risk framework covers people, processes, and systems. So why do the largest losses of the last decade (Archegos, Wirecard, Nexperia, Fisker) keep coming from the fourth Basel category that most dashboards treat as unmeasurable? A practical guide to the KRIs that lead, the KRIs to retire, and the external-event gap hiding in plain sight.

Most operational risk dashboards track the wrong metrics

Most operational risk dashboards track dozens of KRIs. Many track hundreds. And yet virtually none of the losses that actually reshape a quarter (LIBOR, Wirecard, Archegos, Silicon Valley Bank, Nexperia's recent governance rupture, the Fisker supplier cascade) showed up in anyone's KRI heatmap until it was far too late.

That isn't a coincidence. It's a design flaw.

The operational risk function is drowning in indicators that measure what's easy to count, not what predicts loss. And while risk teams spend each quarter recalibrating threshold bands on ticket volumes and failed trades, the actual exposure is accumulating in a dimension they don't monitor at all.

This article does two things most KRI literature avoids. First, it names the specific indicators every operational risk team should track, categorized by Basel's four-source definition. Second, and more importantly, it names the indicators to retire immediately, because they're consuming attention without producing signals.

What a KRI actually is and what it isn't

A Key Risk Indicator is a metric designed to give early warning that a specific risk exposure is increasing. That's it. Not a dashboard decoration, not a post-incident scorecard, not a regulator placeholder.

Three distinctions matter, and most operational risk dashboards get at least one of them wrong:

  • KRI vs. KPI. A Key Performance Indicator measures how well something is working. A KRI measures how close something is to breaking. Revenue-per-employee is a KPI. Turnover rate in critical control functions is a KRI.
  • KRI vs. KCI. A Key Control Indicator measures whether a specific control is functioning. Reconciliation exception rate is a KCI. The trend in reconciliation exceptions, rising fast enough to signal a process weakness, is the KRI.
  • Leading vs. lagging. A leading KRI moves before the risk event. A lagging KRI moves after. Loss event count is lagging. Unusual overtime concentration on a single trading desk is leading.

The operational risk community has known this since the Basel Committee codified operational risk capital requirements in 2004. In practice, most dashboards have drifted into lagging territory anyway, because lagging data is cleaner, auditable, and doesn't require judgment calls. That drift is the problem.

The Basel four-source model and the one everyone ignores

Basel II and Basel III define operational risk as:

"The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events."

Read that carefully. Four sources. Not three.

  • People: human error, fraud, unauthorized activity, skill gaps, key-person dependency
  • Processes: control failures, reconciliation breaks, execution errors, model risk
  • Systems: IT outages, cyber incidents, infrastructure fragility
  • External events: supplier and counterparty failure, regulatory action, geopolitical disruption, infrastructure breakdowns outside your perimeter, reputational damage carried through media and public discourse

The standard KRI dashboard (whether in a global bank, a mid-sized insurer, or a multinational corporation) typically covers the first three in depth. External events are either absent entirely, reduced to a handful of static country-risk scores, or outsourced to "reputation monitoring" that isn't wired into the operational risk framework at all.

This is backwards. A quick tour of the last decade's largest operational risk losses makes the pattern uncomfortable:

  • Archegos (2021). Credit Suisse lost $5.5bn not from an internal systems failure, but from counterparty concentration that external signals had been flagging for months.
  • Wirecard (2020). Third-party acquirer relationships and offshore round-tripping were visible in public filings, press coverage, and short-seller research years before BaFin acted.
  • Nexperia (2025). A governance rupture between corporate entities in different jurisdictions halted shipments to European automotive customers — an external event, not an internal systems failure.
  • Fisker supplier cascade (2024). Tier-1 supplier distress signals were visible in news cycles and regulatory filings months before Fisker's own liquidity crisis became public.

In every case, the operational risk function had indicators. It was just looking at the wrong part of the perimeter.

KRIs worth tracking by risk source

The following list isn't exhaustive, but every item is a genuine leading indicator: it moves before the loss event, and it's actionable. If your dashboard doesn't have at least one strong KRI in each category, you have a structural blind spot.

| Risk Source | KRI | Why It Leads |
| --- | --- | --- |
| People | Turnover rate in critical control functions | Rising attrition in compliance, risk, and reconciliation roles precedes control failures by one to two quarters |
| People | Overtime concentration on a single desk | Sustained overtime is the classic precursor to rogue activity; both Kerviel (Société Générale) and Leeson (Barings) showed this signal before collapse |
| People | Unused mandatory-leave days in trading and treasury | Forced leave is itself a rogue-trading control; violations signal that the framework is eroding |
| Processes | Reconciliation break aging, not count | Volume of breaks is noise; breaks that stay open past 30 days are signal |
| Processes | Manual override rate per business line | Rising overrides mean the process is decaying faster than it's being fixed |
| Systems | Critical-system change failure rate | Failed changes predict incident clusters within 60 days |
| Systems | Third-party SLA deviation trend | Individual misses are noise; consistent drift across multiple vendors signals systemic dependency risk |
| External events | Counterparty and supplier distress signals (OSINT) | News, filings, and litigation clusters precede visible financial distress by weeks to months |
| External events | Regulatory action velocity in peer institutions | Enforcement trends on peers predict your own exposure |
| External events | Geopolitical disruption velocity in concentration markets | Tracks rate of change, not static country scores |
| External events | Non-English media narrative shifts on key counterparties | Tone and volume changes in local-language press often lead English coverage by days to weeks |
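The aging-not-count distinction in the process rows is worth making concrete. A minimal sketch, using hypothetical break records and the 30-day cutoff from the table:

```python
from datetime import date

# Hypothetical open reconciliation breaks: (break_id, date_opened).
open_breaks = [
    ("BRK-101", date(2026, 1, 5)),
    ("BRK-102", date(2026, 2, 20)),
    ("BRK-103", date(2026, 3, 1)),
    ("BRK-104", date(2025, 12, 15)),
]

as_of = date(2026, 3, 10)

# Raw open-break count is the noisy KCI; breaks aged past 30 days are the KRI.
aged = [b for b, opened in open_breaks if (as_of - opened).days > 30]

print(f"Open breaks (noise): {len(open_breaks)}")
print(f"Aged >30 days (signal): {len(aged)}")
```

Here four breaks are open but only two have aged past the cutoff; the trend in that second number, not the first, is what belongs on the forward-looking dashboard.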

Notice the asymmetry. Mature frameworks have roughly a dozen KRIs in each of the first three categories. In the fourth, they typically have one or none. That gap is the story of the next decade of operational risk losses.

KRIs to ignore (or retire immediately)

This is the section that gets skipped in every other KRI guide, and it's the one that will do the most work for your dashboard. Every one of the following shows up constantly in operational risk reporting across banks and corporations. Every one should be under review.

1. Loss event count. Not a KRI. It is a lagging measure by definition. The event has already happened. Useful for trend analysis and capital modeling. Useless for early warning. Move it to a separate loss database view, not the forward-looking dashboard.

2. Open audit findings. Also lagging. These describe what the internal audit function has already discovered. They don't tell you where the next issue is forming. Tracking them as a KRI inverts the purpose of the dashboard.

3. Attempted phishing emails received. This is cybersecurity vanity data. The inbound volume tells you almost nothing about exposure. The meaningful signals are successful intrusion attempts, lateral-movement anomalies, and privilege-escalation deviations, not the inbound noise that your filters already catch.

4. KRIs with thresholds that have never been triggered. If a threshold has been stable for two years without producing an action, one of two things is true: the threshold is wrong, or the metric is irrelevant. Either way, it's consuming real estate in the dashboard and attention from the committee.

5. KRIs without an assigned action owner. If nobody's job description says "act when this metric moves," it isn't a KRI. It's a wall decoration. These proliferate in dashboards built for regulatory display rather than risk management, and regulators themselves have been pushing against this for a decade.

6. Duplicative KRIs across risk, compliance, and audit. The same underlying data point routinely gets rebranded three different ways across three different functions. This doesn't triple visibility. It triples reporting overhead, dilutes accountability, and produces committee discussions where nobody remembers who owns the action.

7. Point-in-time KRIs reported quarterly. In 2026, any indicator reviewed quarterly is, definitionally, not early warning. It's a historical report. The operational risk universe now moves on a daily, sometimes hourly, timescale. Quarterly cadence is a legacy of the era when data collection was the bottleneck. It is not anymore.

Most operational risk functions that run this filter honestly retire 30 to 40 percent of their KRI portfolio on the first pass. The remaining set, properly thresholded and backed by external-event monitoring, outperforms the pre-audit dashboard on almost every meaningful criterion, including regulatory conversations.

The missing KRI class: external-event indicators

This is the category Basel explicitly named in 2004 and that has been systematically underinvested in ever since. It is also where the most recent wave of major operational risk losses has concentrated and where the biggest remaining alpha sits for any institution willing to build the capability.

External-event KRIs are signals about the world outside the institution's perimeter that predict operational risk exposure. They include:

  • Counterparty and supplier distress signals: litigation filings, leadership departures, payment delays reported in specialist press, audit-firm resignations, factory incidents, labor actions
  • Regulatory action velocity: enforcement pattern changes in the regulator's peer universe
  • Concentrated media narrative shifts: localized sentiment divergence, negative tone in local-language press, pre-break reporting patterns in non-English media
  • Corporate structure changes: sudden subsidiary creation in sanctioned jurisdictions, beneficial ownership changes, offshore relocations (the Wirecard pattern, the 1MDB pattern)
  • Geopolitical disruption clusters: not static country scores, but rate-of-change in event frequency affecting your physical, contractual, or counterparty footprint

The reason most institutions don't track these isn't that they don't want to. It's that the data is unstructured, multilingual, and arrives at a velocity manual monitoring cannot process. Three or four analysts cannot read the world in real time. And the tools that claim to do it (generic media monitoring, tier-one news aggregators, country risk scoring products) typically either drown teams in noise or compress the signal into scores too abstract to act on.

This is exactly the gap Semantic Visions aims to close. Its platform continuously monitors more than 2 million news sources in over 12 languages, tracking more than 18 million companies across 7 risk dimensions. It deduplicates and semantically clusters events, then surfaces the specific signals that materially change operational risk exposure on a named counterparty, supplier, or geography. Fisker's supplier distress was visible in Semantic Visions feeds weeks before the company's own liquidity crisis reached the mainstream financial press. Wirecard's third-party acquirer irregularities were traceable in non-German press clusters long before BaFin finally moved.

The point isn't that one vendor solves the problem. The point is that external-event KRIs are now technically addressable at scale. Institutions still treating this category as unmeasurable are making a 2015 assumption in a 2026 operational environment.

Time-to-detection: the meta-KRI nobody talks about

Here's a question worth running across any operational risk function: on the last material operational loss your institution booked, how long was the gap between the earliest available external signal and the first formal briefing to your risk committee?

In most institutions that have done this exercise honestly, the answer is measured in weeks. Sometimes months.

That gap is itself a KRI and arguably the most important one that most institutions don't track. It measures the health of the entire risk intelligence apparatus, not one specific exposure. A 60-day detection lag is not a dashboard problem. It is a systemic one.

How to measure it:

  • On each material risk event, log the date of the earliest verifiable public signal
  • Log the date of the institution's first formal awareness
  • Track the delta as a trailing metric across every event in the reporting year
  • Set an improvement target, and own it at the committee level
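The four steps above reduce to a small calculation. A sketch with hypothetical event dates:

```python
from datetime import date
from statistics import mean

# Hypothetical material events:
# (earliest verifiable public signal, first formal institutional awareness).
events = [
    (date(2025, 3, 1), date(2025, 4, 15)),   # supplier distress
    (date(2025, 6, 10), date(2025, 7, 2)),   # counterparty litigation
    (date(2025, 9, 5), date(2025, 11, 1)),   # regulatory action on a peer
]

# The meta-KRI: trailing mean of the signal-to-awareness lag, in days.
lags = [(aware - signal).days for signal, aware in events]
time_to_detection = mean(lags)

print(f"Lags (days): {lags}")
print(f"Mean time-to-detection: {time_to_detection:.0f} days")
```

With these made-up dates the trailing mean lands around 41 days, which is exactly the kind of number a committee can own an improvement target against.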

Institutions that have implemented this metric report that the act of measuring it, more than any specific tool deployment, is what moves the number down. The cliché holds: you can't improve what you don't measure.

How to audit and retire your KRI portfolio

A four-question test, applied to every KRI currently in the dashboard:

  1. Signal-to-noise test. In the last 12 months, has this metric moved in a way that triggered a meaningful management action? If not, it is either mis-thresholded or irrelevant.
  2. Action-ownership test. Is there a named individual whose job responsibility is to act when this metric moves? If not, it is reporting, not risk management.
  3. Threshold-validity test. Was the threshold last calibrated under current business conditions? Thresholds set in 2019 are not valid in 2026.
  4. Predictive-lag test. Does the metric move before the risk event, or after? If after, it belongs in the loss database, not the KRI dashboard.

A clean portfolio after this audit typically has 40 to 60 indicators rather than 150 to 300, each mapped to an owner, each carrying a threshold validated within the last 12 months, each covering one of Basel's four sources including external events.
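Run as a filter, the four tests look something like the sketch below. The records and field names are hypothetical; the logic mirrors the four questions one-for-one.

```python
# Hypothetical KRI records scored against the four audit questions.
kris = [
    {"name": "Loss event count", "triggered_action": False,
     "action_owner": None, "threshold_year": 2019, "leading": False},
    {"name": "Recon break aging", "triggered_action": True,
     "action_owner": "Head of Ops Control", "threshold_year": 2026,
     "leading": True},
]

def passes_audit(kri, current_year=2026, max_threshold_age=1):
    """A KRI survives only if it passes all four tests."""
    return (
        kri["triggered_action"]                                        # 1. signal-to-noise
        and kri["action_owner"] is not None                            # 2. action ownership
        and current_year - kri["threshold_year"] <= max_threshold_age  # 3. threshold validity
        and kri["leading"]                                             # 4. predictive lag
    )

keep = [k["name"] for k in kris if passes_audit(k)]
retire = [k["name"] for k in kris if not passes_audit(k)]
print(f"Keep: {keep}")
print(f"Retire: {retire}")
```

The conjunction matters: an indicator that fails any one test leaves the forward-looking dashboard, which is what drives the typical 150-to-300 portfolio down toward 40 to 60.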

From periodic dashboards to continuous risk telemetry

The deeper shift underneath all of this is that operational risk management is migrating from a periodic discipline to a continuous one. KRI dashboards refreshed monthly or quarterly are vestiges of a world in which data collection was expensive and slow. That world ended around 2015. Institutions still operating under its assumptions in 2026 are, structurally, late.

The new operating model has three properties:

  • Event-based, not cycle-based. KRIs update when signals arrive, not when reports are due.
  • External as first-class, not afterthought. External-event monitoring sits inside the operational risk framework, not next to it.
  • Time-to-detection as the organizing KPI. The whole system is tuned to compress the gap between emerging signal and management action.

This is where operational risk management is going, whether individual institutions move with it or not. The ones that move first will spend the next five years compounding a lead-time advantage on everyone else. The ones that don't will keep producing excellent backward-looking dashboards of exposures they failed to see forming.

See risk before it's news.

Close the external-event gap in your operational risk framework. Semantic Visions maps external-event KRIs for banks and global corporations across 300,000+ sources in 12+ languages in near real time, tracking 18 million companies across six risk dimensions, with lead time over the mainstream financial press measured in weeks, not days.

Talk to us about building your external-event KRI layer

