Judgement vs Knowledge in the AI Era
Overview
As AI handles more routine tasks, the remaining human work shifts toward judgement: reviewing AI outputs, weighing context, and making high-stakes decisions. The consequences of every judgement are amplified by what an agentic AI workforce can execute on the back of it.
The Pattern
Every major technological transition reshuffles the hierarchy of human capabilities:
- Printing press → made literacy non-negotiable
- Electrification → elevated systems thinkers over machine operators
- AI → elevates judgement over knowledge, across virtually every sector
The Accountability Problem
Leaders implementing AI report the same challenge: the hardest problem is not the technology but maintaining accountability at the speed AI enables. AI leaves each worker with fewer tasks, but each remaining task carries more judgement, higher stakes, and faster execution via agentic systems.
Historical Context
Herbert Simon (1971): “A wealth of information creates a poverty of attention.” This was theoretical then. Fifty years later, it describes the daily reality of knowledge workers navigating constant notifications and AI-generated content.
The Deeper Historical Pattern
Brynjolfsson & McAfee argue that the second machine age automates mental power just as the first automated physical power. Frey’s Enabling vs Replacing Technologies framework clarifies why this matters: AI is an enabling technology for those with strong judgement and a replacing technology for those without it. This is why bounty grows but spread widens — judgement-rich workers thrive, judgement-poor workers are displaced.
Karl Polanyi’s “double movement” predicts the political response: society will move to protect itself from the market’s commodification of cognition, just as it moved to protect itself from the commodification of labour in the 19th century.
Polanyi’s Paradox and the AI Inversion
Autor provides the economic mechanism connecting judgement to AI’s labour-market impact. Michael Polanyi observed in 1966 that “we can know more than we can tell.” Tacit knowledge — making a persuasive argument, recognizing a face, adapting procedures to variable cases — resisted codification. Pre-AI computers excelled at explicit, rule-based tasks but could not touch work requiring tacit expertise. This is why computerization amplified the value of elite expert judgement: the tasks computers could not do were precisely the ones requiring human judgement.
AI inverts this. It learns by example rather than following hard-coded procedures, acquiring capabilities it was never explicitly engineered to possess. AI can now engage in something resembling expert judgement — a capability previously limited to elite professionals. This does not make judgement obsolete. Autor argues it makes judgement more widely deployable: AI can extend expert decision-making to workers with foundational training but without elite credentials (see Expertise Democratization).
The empirical evidence from the BCG study adds a critical nuance. Inside AI’s capability boundary, judgement requirements shift from “generate the answer” to “evaluate the AI’s answer.” Outside that boundary, judgement matters even more — the workers who cannot tell when AI has crossed from competence to confident error are the ones who perform worst. Dell’Acqua’s “falling asleep at the wheel” finding (see The Jagged Frontier) shows that high-quality AI can actually degrade judgement by reducing the need to exercise it.