(why AI makes this CEO-critical ability more valuable than ever)
For the last six months, your organisation has been buzzing with AI.
Every VP came to you with a new, data-backed, AI-polished plan.
Brilliantly argued with pixel-perfect presentation
(if a little lacking in emotion… funny that).
Finance has conclusive data on where to cut.
Marketing has a brilliant model for where to grow.
Ops proudly unveils its uber-optimised supply chain.
This is the new trap for the modern leader: a surplus of analysis has created a deficit of clarity.
For decades, we valued the acquisition and synthesis of information. The “smartest person in the room” was the one with the best data.
Now, that part of the value chain is being rapidly commoditised. AI can read 10,000 documents, run 1,000 scenarios, and write a compelling report on all of them before you’ve had your first coffee.
Analysis is becoming “too cheap to meter.” When analysis is free, the bottleneck moves up the value chain to judgment.
And if there's one thing that's been central to the role of leadership since Homo sapiens started hanging out in groups, it's judgment.
Resources have always been dwarfed by needs and opportunities. Leadership is about making the tough calls, often (especially) when it's far from clear what the 'right' choice is, and usually with imperfect data and restricted options.
In a recent analysis, respected AI strategist Nate B. Jones laid out 10 principles of human judgment that AI cannot replicate.
Together, they're 90% of the job description for 21st-century leadership.
The NBJ Judgment-10
Watch Nate’s video here
1. The Scarcity Principle: Identifying what is truly scarce and valuable when intelligence is abundant.
2. The Context Principle: Fusing pattern recognition with a deep understanding of what makes a situation unique.
3. The Constraint Principle: Moving beyond mere analysis to judge what is actually possible to execute right now.
4. The Sequencing Principle: Ordering bets and actions correctly to build momentum and ensure success.
5. The Deprioritisation Principle: The strategic and explicit decision of what not to do.
6. The Calibration Principle: Compounding judgment by creating tight feedback loops to learn what works and what doesn’t.
7. The Coalition Principle: Mapping the social graph of decision-makers and sequencing their buy-in.
8. The Responsibility Principle: Taking ultimate accountability for outcomes, which a tool never can.
9. The Transparency Principle: Building trust by transparently showing your reasoning and trade-offs.
10. The Compounding Principle: Encoding your judgment into scalable systems and playbooks that last.
Let’s break down what this framework really means for your role.
1. The Strategist: AI Can’t Choose
As The Strategist, your primary function is no longer to find the answer, but to select it.
AI is an expansion engine, generating a surplus of ‘correct’ analyses. Your judgment acts as the reduction valve. You must apply the Context that AI lacks – the unwritten rules of your culture, your brand’s unique ‘permission’ to act, and the nuances of your market position. This context allows you to identify true Scarcity, sensing the opportunities AI misses because they aren’t in its dataset.
Finally, you must execute the most critical strategic act: Deprioritisation. While AI generates endless possibilities, you must be the single voice that says 'no,' making the hard sacrifices that define a true strategy.
2. The Architect: AI Can’t Build in the Real World
As The Architect, you are the chief reality officer, translating AI’s ‘perfect world’ plans into real-world execution.
Your judgment begins by applying Constraints, ruthlessly mapping the theoretical ideal onto the practical realities of your limited capital, time, and talent. From there, you must determine the high-stakes Sequencing of actions, as a brilliant idea executed in the wrong order is a failed idea. To make this execution scalable, you must apply Calibration, building the tight feedback loops that allow the organisation to learn.
This is how you Compound your judgment—by taking those learnings and encoding them into playbooks and systems that create durable leverage.
3. The Leader: AI Can’t Be Accountable
As The Leader, you perform the functions that are permanently and exclusively human.
An AI is a tool; it can never be responsible. Your first task is to build a Coalition, mapping the complex social and political graph of your stakeholders and sequencing their buy-in. The currency you use to do this is Transparency—building the trust required for people to follow you by showing your reasoning and the trade-offs you’ve made, which an AI cannot do.
Finally, you are the ultimate firewall: Responsibility. In a world of probabilistic AI outputs, you must be the single point of accountability, owning the outcomes in a way a tool never can. This is the function that cannot be automated.
AI amplifies the impact of your good judgment and fatally exposes the lack of it. It commoditises the “what” and “how,” leaving you with “why” and “when.”
Your job is safer than it’s ever been.
It’s just also harder, and more valuable, than ever before.