Are your early careers metrics fit for purpose?

About the Author

Ruairi O’Donnellan is the Head of Marketing at Intuition. Intuition is your end-to-end strategic learning partner, helping you identify, design, and deliver the knowledge and skills your teams need to succeed. The perspectives in this article are informed by discussions with Intuition colleagues who work closely with risk teams across global financial institutions, as well as by ongoing delivery of risk capability and problem-solving programs.

***

When you look across most financial institutions today, it would be hard to argue that early careers programs are not measured. There are dashboards, there are summary reports, there are completion statistics and assessment results and satisfaction scores, all of which are reviewed with reasonable frequency, often with a sense that the program is being run properly, that it is structured, monitored, and therefore broadly under control.

And because similar metrics are used across the industry, and because peers tend to report in comparable ways and benchmark against similar indicators, it can begin to feel as though aligning to that standard is, in itself, evidence that the measurement framework is sound. Perhaps it is time to start challenging that assumption.

The question that lingers, and which is not always asked directly, is whether those metrics are actually telling you what you need to know, or whether they are simply telling you what has traditionally been easy to measure and, as a result, has been most widely adopted.

The likelihood is that your early careers program is aligned with the industry norm and is still operating at the mean, still leaving deeper insight unexplored, still relying on indicators that confirm activity rather than illuminate capability.

***


Traditional metrics: Useful, but only part of the picture

For a long time, early careers programs have been evaluated through a relatively consistent set of lenses.

Attendance rates, completion percentages, exam pass marks, participant feedback, delivery against budget: all of these measures serve a purpose. They confirm that the program ran, that participants engaged, that core concepts were tested, and that the overall experience was broadly positive. In more centralized, less complex environments, that level of confirmation often felt sufficient.

And yet, when you pause for a moment and look at what those metrics actually represent, you begin to see their limits.

Attendance confirms presence, but not depth of understanding. Completion confirms exposure, but not integration. A pass mark confirms recall at a point in time, but not necessarily judgment under pressure. Satisfaction scores reflect how participants felt about the experience, which matters, but they do not necessarily reveal whether capability has meaningfully formed.

None of this makes traditional metrics wrong. They are, of course, necessary, and in many cases, they are well executed. The issue is not that they exist, but that they are sometimes treated as the primary indicators of success, rather than as one layer within a broader measurement framework.

By digging deeper into the program and interrogating the metrics behind it, you can develop a far clearer understanding of whether it is actually succeeding.

The environment did not stand still

If we zoom out for a moment, it becomes easier to understand how we arrived here.

In the early 2000s, many graduate programs were heavily classroom-based, often centralized, operating in a context where access to structured financial knowledge itself carried weight. Measurement focused naturally on attendance and examinations, because ensuring exposure to core material was the main objective.

As the 2010s unfolded, institutions expanded globally, regulatory expectations intensified, and blended learning models became more common. Measurement broadened in response, incorporating more structured assessments and feedback mechanisms, yet these systems were still largely centered on confirming delivery and testing recall.

Then the environment shifted again. Cross-border collaboration became routine rather than exceptional, regulatory scrutiny remained persistent, capital discipline tightened, and the rise of AI compressed access to information in a way that subtly changed the value equation around knowledge itself. What once differentiated professionals, the ability to recall information, became less scarce, while the ability to interpret, apply, and escalate with sound judgment became more critical. Human skills and human judgment are now among the major development areas for organizations looking to build a competitive advantage.

Alongside all of these developments, cost scrutiny increased, leadership expectations around return on investment sharpened, and early careers programs began to sit more visibly within broader conversations about operational readiness and risk alignment.

The context evolved steadily, and yet in many cases the underlying measurement logic moved only slightly.

The insight gap

What this creates is not a failure of oversight, but a gap in insight.

Traditional early careers metrics are very good at confirming that something happened, that people attended, that they passed, that they rated the program positively. That is useful: it gives a surface-level understanding of the program's success. But there is a deeper level of understanding we can generate if we interrogate the information we already have in an intelligent way.

What traditional metrics are less effective at revealing is where capability is forming unevenly, whether one region is conceptually lagging another, how applied performance varies across cohorts, or which specific concepts are consistently weak when tested in realistic scenarios.

If applied decision-making is inconsistent across streams, would your current dashboards surface that clearly? If escalation judgment differs materially between regions, would that be visible in a way that prompts early intervention? If participants understand regulatory frameworks in theory but struggle to apply them under simulation or in discussion, would your reporting capture that nuance, or would it remain implicit until surfaced later in the role?

These are not dramatic breakdowns. They are quieter patterns, and without more structured analytics they can remain difficult to detect. The limitation, therefore, is not in effort, but in depth. It is the difference between knowing that a program ran successfully and understanding precisely how capability is progressing within it.

What modern early careers metrics look like

In response to this growing complexity, measurement thinking is beginning to evolve. More advanced programs are layering additional forms of analysis on top of traditional indicators, moving beyond point-in-time testing toward more longitudinal, capability-oriented insight.

This can involve establishing clearer baselines through diagnostic assessments before and after programs, tracking concept mastery over time rather than relying on a single exam, incorporating simulation-based performance scoring to assess applied decision-making in more realistic conditions, and comparing performance across cohorts and regions to identify patterns that would otherwise remain hidden. It can also mean generating manager-facing reports that translate learning data into operationally meaningful signals, rather than simply summarizing attendance and feedback.
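To make this layered approach more tangible, the sketch below shows, in Python with pandas, how pre- and post-program diagnostic scores might be compared across regions to surface uneven concept mastery. It is a minimal illustration only: the data, the column names, and the lagging threshold are hypothetical assumptions, not a description of any specific institution's reporting stack.

```python
# Minimal sketch of cross-cohort capability analytics (illustrative only).
import pandas as pd

# Hypothetical export: one row per participant per concept, with diagnostic
# scores taken before and after the program.
scores = pd.DataFrame({
    "region":  ["EMEA", "EMEA", "APAC", "APAC", "AMER", "AMER"],
    "concept": ["liquidity", "escalation", "liquidity", "escalation",
                "liquidity", "escalation"],
    "pre":     [42, 55, 40, 58, 45, 52],
    "post":    [78, 61, 75, 59, 80, 83],
})

# Measure capability gain over time, not a single pass/fail snapshot.
scores["gain"] = scores["post"] - scores["pre"]

# Average gain per region and concept shows where mastery is forming unevenly.
by_region = scores.groupby(["region", "concept"])["gain"].mean().unstack()
print(by_region)

# Flag region/concept pairs whose gain lags the cohort-wide average by a
# margin (the 10-point threshold is an assumption), prompting intervention
# before the weakness surfaces in the role.
cohort_mean = scores.groupby("concept")["gain"].mean()
print(by_region.lt(cohort_mean - 10))
```

In this toy data, one region's escalation scores barely move even though every participant passes, which is exactly the kind of quiet pattern a completion dashboard would never surface. A production version of the same comparison would feed the manager-facing reports described above.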

The emphasis in these approaches is subtle but significant. The question shifts from “Did they complete the module?” to “How did they perform when required to integrate and apply the concept?” And this is the key point: when the knowledge is applied in practice, does it work? Is it impactful? Is it making the employee better at what they do? Or is it sitting idle, never tested in a real-world environment?

Traditional metrics do not disappear in this model. They remain part of the picture, but they are no longer the only lens through which success is judged.

The underlying shift

At its core, what we are seeing is a gradual movement from measuring training activity toward generating capability insight, from confirming program delivery toward building operational confidence. In 2026, with global dispersion, regulatory intensity, and technological acceleration defining the environment, the assurance provided by surface-level metrics may not always be enough.

Early careers programs shape how professionals interpret capital, risk, liquidity, compliance, and escalation from the very beginning of their careers. The data generated within those programs, therefore, carries strategic significance. The question is not whether you are measuring your program, but whether your measurement framework is calibrated to the complexity of the world your intake is entering.

And so the reflection becomes a simple one.

If capability began to drift within your early careers program over the next year, if applied understanding varied meaningfully across regions or streams, would your current metrics surface that insight early enough to act, or would they continue to reassure you that completion rates remain comfortably high?

Key takeaways

  • In 2026, early careers programs operate in a structurally more complex environment, shaped by global dispersion, regulatory intensity, AI-driven information abundance, and heightened scrutiny around measurable outcomes.
  • Traditional metrics such as attendance, completion rates, pass marks, and satisfaction scores remain necessary, but on their own they provide confirmation of activity rather than deep insight into capability formation.
  • The core risk is not a lack of measurement, but a lack of depth. Surface-level metrics confirm that a program ran, yet they do not always reveal how capability is progressing beneath the surface.
  • As institutional complexity increases, so too must the sophistication of early careers analytics. Leaders require insight into applied readiness, not just participation and recall.
  • Modern measurement approaches are evolving toward diagnostic benchmarking, longitudinal concept tracking, simulation-based performance scoring, cross-cohort comparison, and manager-facing reporting that translates learning data into operationally meaningful signals.
  • The shift underway is philosophical as much as technical, moving from training metrics toward capability analytics, and from reporting delivery toward generating operational confidence.
  • Because early careers programs shape long-term judgment, risk interpretation, and escalation behavior, the depth and intelligence of measurement applied in those early months has a compounding impact over time.
  • The real question is not whether your early careers program is measured, but whether your current metrics are calibrated to the demands of the environment your graduates are entering.

Frequently asked questions

Why are traditional early careers metrics no longer enough on their own?

Traditional metrics like attendance, completion, pass marks, and satisfaction confirm that a program ran and that people engaged, but they mostly validate activity. They do not reliably show whether knowledge has been integrated, whether judgment holds under pressure, or whether capability is forming evenly across a cohort. The problem is not that these measures exist, but that they are often treated as primary proof of success.

What do attendance and completion rates actually tell you?

Attendance confirms presence, not depth of understanding. Completion confirms exposure, not integration into real work. These indicators can be useful as one layer in a broader framework, especially for confirming program delivery. But if you stop there, you risk assuming the program is working simply because participation is high, even if applied performance and decision-making vary widely across people, regions, or streams.

Why can exam pass marks be misleading for early careers programs?

A pass mark can show recall at a point in time, but it does not necessarily demonstrate sound judgment under pressure. Participants may understand frameworks in theory and still struggle to interpret, apply, or escalate correctly in realistic conditions. As access to information has become easier and faster, especially with AI, the differentiator shifts from memorization toward interpretation and application, which traditional exams may not fully capture.

What is the insight gap in early careers measurement?

The insight gap is the difference between knowing a program ran successfully and understanding how capability is progressing within it. Traditional dashboards are strong at confirming that participants attended, passed, and rated the experience well. They are weaker at revealing uneven capability formation, regional or cohort gaps, persistent weak concepts under realistic testing, or inconsistent escalation judgment. These patterns can be quiet and difficult to detect without deeper analytics.

How has the environment changed since the early 2000s for early careers programs?

Graduate programs moved from centralized, classroom-heavy models toward more globally distributed and blended learning environments. Regulatory expectations intensified, cross-border collaboration became routine, and cost scrutiny increased. AI also compressed access to information, reducing the advantage of simple recall and increasing the importance of interpretation, application, and judgment. Leadership expectations around ROI and operational readiness have sharpened, but measurement logic has often evolved only slightly.

What kinds of capability issues do traditional metrics tend to miss?

Traditional metrics may miss where capability is forming unevenly across a cohort, whether one region is conceptually lagging another, and how applied performance varies across streams. They can also fail to surface consistent weak points when concepts are tested in realistic scenarios, or differences in escalation judgment between regions. Without structured, capability-oriented analytics, these issues may remain implicit until they show up later in role performance.

What do modern early careers metrics look like in practice?

Modern measurement layers capability insight on top of traditional indicators. This can include diagnostic assessments before and after a program to establish baselines, tracking concept mastery over time rather than relying on one exam, and simulation-based scoring to assess applied decision-making in realistic conditions. It also involves comparing performance across cohorts and regions to identify patterns, and producing manager-facing reports that translate learning data into operationally meaningful signals.

What is the underlying shift in how early careers success should be judged?

The underlying shift is from measuring training activity to generating capability insight, and from confirming delivery to building operational confidence. The key question moves from “Did they complete the module?” to “How did they perform when required to integrate and apply the concept?” Traditional metrics still matter, but they become one lens within a broader framework that is calibrated to global dispersion, regulatory intensity, and technological acceleration.
