
Facet 02

AI & The Human

The tools are changing faster than we can think about what they mean. I use AI seriously, with open eyes and declared bias — and I find it genuinely expansive. But the bigger question isn't about the tools. It's about what it takes to remain a purposeful person while everything underneath shifts. That's what this facet is working through.

AI: A Foundational Position

Holding the Tension

There is a version of this essay that leads with optimism. Judgment becomes more valuable as execution gets cheaper. Abundant intelligence expands the economy. The people who set strategy, manage risk, and take responsibility for outcomes become more leveraged, not less. I can argue that case. I believe parts of it.

There is another version that leads with alarm. The information ecosystem is about to get exponentially worse. Trust is collapsing at civilisational scale. AI accelerates the dysfunction faster than institutions can respond. I can argue that case too. I believe more of it.

The tension between these two positions is real. I am not going to resolve it — and that refusal is itself the position.

AI is the most significant civilisational development of our era. It has the potential to expand what humanity can build, solve, and become. It also has the demonstrated capacity to accelerate existing dysfunction — in information, in trust, in the coherence of shared reality, in the architecture of conflict — faster than our institutions can respond. Avoiding the worst outcomes will require something close to a wartime mobilisation of collective will and capacity across multiple domains simultaneously. Not beyond us. But requiring all of us, performing significantly above our demonstrated historical average, at a moment when our capacity for collective action is under greater strain than at any point in living memory.


We Did Not Arrive Here in Good Shape

The preconditions for AI's most damaging effects — division, eroded institutional trust, the fracturing of shared reality, the deliberate manufacture of confusion as a political instrument — were already in place before the technology arrived. AI did not create these conditions. It inherited them, and is now accelerating them at a pace and scale that existing social, political, and institutional structures were not built to absorb.

The information terrain is where I work, and it is where AI's corrosive effects are most immediately visible. But the dangers do not stop there. Autonomous weapons systems, the acceleration of biological and other asymmetric threats, the concentration of unprecedented capability in the hands of states and actors with competing interests, the erosion of the legal and institutional frameworks that have — imperfectly, unevenly — constrained conflict: these are not speculative concerns. They are already in motion. The question of whether we can respond adequately to any of this is not primarily a technological question. It is a question about whether fractured societies can find sufficient collective will — across difference, across borders, across competing interests — to act at the scale the moment requires. That is why I reach for the wartime analogy. Not for its drama, but for its precision: there are moments when the normal pace of adaptation is simply insufficient, and this is one of them.


The Terrain I Know Best

Within that broader landscape, the terrain I know best is the relationship between information, trust, and power — and it is here that the stakes are most immediately legible. When information is weaponised — when narratives are designed not to persuade but to exhaust, not to argue but to destabilise — the first casualty is not truth but the shared capacity to evaluate it. We have been watching this process accelerate for two decades. What AI adds is not a new dynamic but a dramatic amplification: more content, faster, cheaper, personalised to individual psychology, operating at a scale that puts verification practically out of reach for most people.

The consequence is a transition from high-trust to low-trust conditions at civilisational scale. In a high-trust environment, you persuade with evidence, lead through expertise, argue with logic. In a low-trust environment, none of that works. Evidence is contested. Expertise is treated as bias. Logic is dismissed as agenda. Most organisations and leaders still operate as if credibility were a given. It is not.

AI will not fill that gap. Applied to a contaminated information environment, faster and more confident intelligence produces imperfect conclusions faster and with greater confidence. The organisations likely to outperform in the next decade will not be the ones with the best AI implementations. They will be the ones whose leaders have developed the capacity to read the informal environment — to know what they don't know, to hold uncertainty without paralysis, to maintain judgment precisely when the tools are most confidently wrong.


How to Work Inside It

None of this resolves the question of how to work inside it — which is the question I face every day, and which I suspect most people working seriously in this space face too.

I know from direct experience that the garden grows. Each session, each small piece of work, each return to the problem adds a little, and over time something extraordinary accumulates. I am working now at a pace and quality I could not sustain alone. Not because AI replaces judgment, but because it removes the dead ends. It clears the operational noise and returns me to what actually matters: the thinking, the framing, the decisions that require a human being with a formed point of view and the will to hold it.

That is not a rebuttal to the hard position. It is how I work inside it — using AI as instrument, never as substitute for the judgment that hard experience builds.

The deeper question — how to find strength, find your people, and develop the capacity to live in and even master absolute uncertainty — I approach from a different direction. From a scientific mindset that refuses magical thinking even when the stakes invite it. From a life philosophy that has been hard-earned across decades of practice, including the embodied practices of yoga philosophy not as belief but as instrument: a set of tools for thinking and acting clearly under conditions of noise, disruption, and pressure. That work runs parallel to this one, and will surface in its own place. What I will say here is only that clarity about what is hard is not despair. It can be the opposite.

03

Connected facet

The embodied answer to the analytical question — how clarity under pressure is trained, not just thought. On yoga, philosophy, and the 360 view.

Clarity as Practice →

What Credibility Means Now

In a world where trust has collapsed and expertise is routinely treated as agenda, the question of what makes a claim credible is not academic. Credibility is not conferred by accreditation alone, nor defended by institutional affiliation, nor established by volume or visibility. It is demonstrated — through work that has cost something, through positions held under pressure, through a body of analysis that exists as evidence of what you actually think when the answer is inconvenient.

That kind of credibility is distinct from the productive use of AI as a working instrument. One is about the formation of judgment — slow, costly, irreplaceable. The other is about the execution of it — faster and better with good tools. Confusing the two is one of the more consequential errors of this moment.

Spinbound — the publication I have built over years, and which I did not choose so much as find I could not leave — examines how claims survive or collapse when evidence, incentives, and narratives collide. It exists because the questions it addresses were too important to abandon, and because the circumstances kept making that abandonment impossible. That kind of work, sustained under that kind of pressure, is what I mean when I talk about credibility. Not a platform. A record.

In the current environment, that distinction matters more than it ever has.


I work in this space because it is where the effort is most needed and least well understood. Not as a solution. As a contribution — from someone who has spent a long time on the terrain, who has paid something for the positions she holds, and who is not prepared to look away.

Also in this facet

Research

Finding Dr Claire Clark

AI-assisted historical research into an Australian scholar and public servant whose record was nearly lost to the archive.

Archive

From the archive — selected work from the Japan beat

Journalism filed from Tokyo across the 1990s — recovered and restored.