The gap between theory and reality.

Most AI automation forecasts count digital tasks and call it exposure. They ask one question: does this task involve a computer? If yes, it's flagged as automatable.

We count three layers. What goes in and out — digital or physical. What informal knowledge the task depends on — accumulated context, unwritten norms, tacit pattern recognition that lives in people rather than prompts. And what social function the work serves — whether the act of doing it is load-bearing in ways the output alone isn't.

The result: across the professions in this dataset, the average gap between theoretical and realistic exposure is 27 percentage points. The theoretical ceiling averages 58%. The realistic floor: 31%.

Naive vs. effective exposure — selected roles

Role                                 Theoretical (naive)   Realistic (effective)
Junior Marketing Analyst                             85%                     49%
Copy Editor                                          77%                     57%
Data Scientist                                       78%                     34%
Chief Executive Officer                              27%                     15%
Registered Nurse (Emergency Room)                    24%                      9%
Welder                                               17%                      8%
Effective exposure accounts for substrate depth and social function. Naive exposure counts only digital I/O — the single layer most analyses use.
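The article doesn't give a formula for how the two protective layers discount the naive score. A minimal sketch of one possible rule, assuming multiplicative discounting and entirely hypothetical layer values (the `TaskProfile` fields and the 0.45/0.20 numbers below are invented for illustration, not taken from the dataset):

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Illustrative three-layer profile for a single task.

    naive:     share of the task that is digital-in / digital-out (0..1)
    substrate: dependence on informal, accumulated context (0..1);
               a hypothetical scale, not defined in the text
    social:    how load-bearing the act of doing the task is, beyond
               its output (0..1); also hypothetical
    """
    naive: float
    substrate: float
    social: float

def effective_exposure(task: TaskProfile) -> float:
    """One possible rule: each protective layer multiplicatively
    shrinks the naive (digital-I/O-only) exposure score."""
    return task.naive * (1 - task.substrate) * (1 - task.social)

# A data-scientist-like profile: heavily digital, but thick substrate.
ds = TaskProfile(naive=0.78, substrate=0.45, social=0.20)
print(f"effective: {effective_exposure(ds):.2f}")
```

Under these invented values the effective score lands far below the naive one, which is the qualitative shape the article describes; a real classifier would presumably score the layers per task and aggregate up to a role.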

Junior knowledge workers: the highest realistic exposure.

Junior knowledge roles score highest on effective exposure — not because the work is simple, but because it concentrates the conditions that AI operates best in: well-defined tasks, digital inputs and outputs, thin substrate (little accumulated context), and limited social function.

A junior analyst compiling a market summary from databases and reports has almost nothing protecting that task from automation — no institutional memory required, no relationship on the line. The work is almost entirely in the formal I/O layer.

The more productive framing is not threat, but leverage. These are also the roles where AI tools can amplify individual output most dramatically, where a single person supported by the right tools can do what previously required a team.

Why CEOs score among the lowest on effective exposure.

Chief executives and senior leaders produce almost entirely digital outputs — strategy documents, email, decisions. On paper, that should register as high exposure. In practice, it registers as low.

The gap is the social layer. A CEO's strategy review is not just information transfer — it is the organisational signal about priorities, the moment where ambiguity gets resolved through authority. A board presentation is not just slides — it is a performance of confidence and competence that reassures investors. An executive hire is not just evaluation — it is a relationship and a bet on culture. Automating the document doesn't preserve the function.

Thick substrate compounds this. Senior roles accumulate context at high density — knowing which stakeholders need which framing, which historical decisions constrain the present, which unwritten rules govern what can be said. None of that is in the prompt.

The moat that doesn't need explaining.

Welders, dentists, and emergency nurses score 8–15% effective exposure. Not because the work is cognitively simple — a dentist performing a root canal is integrating tactile feedback, spatial reasoning, and real-time decision-making in ways that rival any knowledge task. An ER nurse is managing competing priorities under life-or-death time pressure.

The protection comes from the analogue layer. The work requires a body to be present. It begins and ends in physical reality. No amount of digital sophistication changes the fact that the drill needs to be held, the patient needs to be touched, the weld needs to be run.

Robotics changes this calculation only at the frontier of research. For the working population, the calculation holds. The moat is physical presence, and that moat doesn't need elaborate justification — it just is.

The outliers: genuinely automatable today.

Proofreaders and copy editors are real exceptions. Not edge cases, not cherry-picked examples — roles where the three-layer analysis lands squarely on high effective exposure because all three conditions are met simultaneously.

The input is text. The output is corrected or improved text. The substrate is primarily rule-based — grammar, style guides, consistency checks — with limited tacit component. The social function is essentially none: a proofreader's relationship to a document is technical, not relational.

The data doesn't soften this. These are roles where AI already does a substantial portion of the work recognisably well, and the remaining human value is thin enough that the economic case for displacement is straightforward. Acknowledging this clearly is part of what makes the lower scores elsewhere credible.

Thick substrate: what protects knowledge workers in tech.

Data scientists have naive exposure approaching 80% — almost all their work is digital-in, digital-out. Yet effective exposure sits at well under half of that. The gap is substrate.

Knowing which model assumptions are wrong for a given dataset. Knowing which stakeholders need which framing for a given result to move. Knowing which data is dirty, which metrics are gamed, which previous analysis was methodologically weak. None of this is in the prompt. All of it is load-bearing.

Cloud architects carry similar protection. Not because cloud infrastructure is opaque to AI tools — they can generate competent Terraform with minimal guidance — but because the right architecture for a given organisation depends on a web of constraints that are organisational, financial, political, and historical. The work of understanding those constraints is where the value lives, and that work doesn't compress into a specification.

These profiles are built from synthetic data — realistic task decompositions generated by AI, classified through our three-layer model. They're a starting point, not a final word. The real picture will only emerge as more workers describe their own jobs.

Map your own job →