The Future with Super AI: Technical Vision and Non-Consensus

Here's my one-line thesis: as AI becomes more capable, the ability
to understand, formulate, and articulate vision becomes the
differentiating human skill, while technical know-how depreciates fastest.

Technical Cognition vs. Technical Skill

There's an important distinction between technical skill and
technical cognition. Skill is knowing how to do something—the
specific mechanics of execution. Cognition is knowing what's
possible and how to adapt—the meta-level understanding of systems.

Technical skill is likely the fastest-depreciating asset in the AI
era. The ability to write code, configure systems, or execute
processes will increasingly be delegated to AI. But technical
cognition—understanding how systems work at a fundamental level,
seeing what's possible, knowing when and where to apply leverage—
this remains valuable. Perhaps more valuable than ever.

In the near future, everyone becomes a CEO and product manager.
Each person commands an army of expert advisors, strategists, and
engineers—all AI. But seeing the vision? That still requires your
own judgment.

Claude Code as an Example

Consider Claude Code. This tool fundamentally changes what's
possible for someone with the right cognition. It lets you operate
across multiple layers of the stack—especially hardware—without
needing the specific skills that previously gatekept access.

Claude Code removes technical skill as the bottleneck. What
remains? Imagination. First-principles thinking. The ability to see
what factors make something possible, then direct AI to execute.

If you can't see it, you can't do it.
If you can see it, you can do it.

Cognition manifests as awareness: knowing that something can be done at all.
Some people "see" that Claude Code can help with a particular
problem. Others don't. This gap in technical cognition may be
unbridgeable—unless AI moves from the passenger seat to the
driver's seat, at which point the dynamic changes entirely.

Why Deep System Understanding Still Matters

I've been exploring kernel development lately. Not particularly
skilled at it yet, but it's fascinating. One observation: the
deeper you go in the stack, the harder it gets—and the more it
reflects accumulated expertise. This echoes Zhang Yiming's view
that difficulty at lower layers represents genuine moats.

But there's another reason to go deep: stripping away abstraction
lets you see more clearly and hack systems more effectively. You
might also see things others don't—non-consensus insights that
emerge from understanding what's actually happening beneath the
abstractions.
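
A minimal sketch of the kind of layer-stripping I mean (my own
illustration, nothing more): the same task expressed through the
stdio library and, one layer down, as a direct write(2) system call.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Through the abstraction: stdio buffers and formats the
           output, and eventually calls write() on your behalf. */
        printf("hello via stdio\n");
        fflush(stdout);

        /* Beneath the abstraction: hand the kernel the bytes
           yourself. write(2) is roughly what actually crosses the
           user/kernel boundary. */
        const char msg[] = "hello via write(2)\n";
        if (write(STDOUT_FILENO, msg, strlen(msg)) < 0) {
            perror("write");
            return 1;
        }
        return 0;
    }

Nothing about the stdio version is wrong, but only the second version
shows the interface the kernel actually exposes, and that lower level
is where the less obvious observations tend to live.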

Deep system understanding creates the possibility of seeing
futures that others cannot see. This is the source of non-consensus.

Two Futures: Consensus or Non-Consensus?

I've been thinking about what happens when super AI arrives. Two
scenarios seem possible:

The consensus future: Super intelligence means everyone has
access to equivalent reasoning power. Given the same intelligence
applied to the same problems, everyone reaches the same conclusions.
Disagreement becomes irrational. Prediction converges.

The non-consensus future: Even with maximum intelligence,
prediction has fundamental limits. Uncertainty persists. Different
visions remain possible because the future is genuinely open.
Non-consensus continues to exist—and continues to matter.

I hope for the second future. A world where non-consensus still
exists is a world where individual vision, judgment, and courage
still matter. Where seeing something others don't can still create
value. That seems like a more interesting future to me.