Your content doesn’t need another tool — it needs intelligence that shows up
This is a long piece — around 17,000 words tracing a 25-year paradigm shift. If you’d rather start with the shape of the argument before committing to the full read, the podcast episode covers the key ideas in about 20 minutes, and the video presentation walks through the framework. Both are at the bottom of this post. If they catch your interest, the deep dive will be here when you’re ready.
Every tool you use demands the same thing: bring your content here, reshape it to fit, and maintain it alongside all the other copies you’re already juggling. We lose eighteen working days a year to this ritual. But a 25-year paradigm shift — from responsive design through Spotify to the protocol layer now dissolving the walls between AI systems — reveals that the problem was never the tools. It was the direction of travel. This post traces the inversion, proposes a practical framework for it, and confronts the risks of getting it wrong.
Don’t breathe on it
I remember the sensation before I remember the decade.
Early 1990s, Gothenburg. I was prototyping a medicine tablet package for a client competition — black-and-white drawings coloured with airbrush, a transparent film folded to look like the real thing. The film was too thick. It kept springing back flat. I worked two days without sleep. On presentation day, I sat in the room next door, frantically taping the film into place while my colleagues stalled the client for an hour and a half. Every time it held, another edge lifted. So sensitive you could barely breathe on it.
That feeling — the medium fighting back against everything you’re trying to make it do — never left me. It just moved.
It moved to the web. Transparent GIFs in HTML tables. Pixel-perfect layouts that collapsed if a single image failed to load. Netscape and Explorer rendering the same page differently. The same fragility, different material.
In 2000, John Allsopp named what I’d been feeling. His essay “A Dao of Web Design” argued that the web’s nature is fluidity, not fixity. We were treating it like a printed page. “The control which designers know in the print medium,” he wrote, “is simply a function of the limitation of the printed page.” The web doesn’t have those limitations. Stop fighting the water.

A decade later, Ethan Marcotte turned Allsopp’s philosophy into technique — responsive web design. The layout adapts to the screen. The first wave of letting go.
But the deeper fragility didn’t go away. It moved somewhere harder to see.
Think about your Tuesday afternoon. You have an idea during a meeting. You type it into a notes app. Later, you copy it into a document to develop it. You find useful research and save it somewhere else. You share a summary in Slack. Colleagues respond. You go back to the document, update it, hunt for the research you saved in a different tool, copy the relevant parts back.
The same idea now exists in four places. None of them is the “real” one. And you’ve spent twenty minutes not on the idea — on the logistics of carrying it between tools.
This is the modern version of breathing on the medicine package. The assumption baked into every piece of software: your content must come to the tool. Notion wants your thoughts in Notion’s structure. Confluence wants your documents shaped to its model. Every tool is a destination that requires your content to arrive, reshape itself, and stay.
Research suggests knowledge workers lose 144 hours per year — eighteen full working days — to rebuilding context after switching between tools. Not doing the work. Re-establishing where they were so they can start doing the work.
And now we’ve built an entire cottage industry to manage this fragility. “Second brains” — Notion databases, Obsidian vaults, Roam graphs. The pitch: one system to rule them all. But the second brain isn’t a solution. It’s a symptom. It exists because none of our tools talk to each other, so we build a meta-tool to manage the other tools. We add orchestration on top of orchestration.
I call this the Duplication Tax — every copy, every reformat, every manual sync. A gentle, persistent drain that accumulates until eighteen working days are gone and you can’t say where they went. I’ve written separately about what I call the Orchestration Load Framework — a way to name and measure the different cognitive loads that tool-switching places on us: the cost of learning each tool’s logic, the cost of maintaining awareness across systems, the cost of context lost in every handoff. The Duplication Tax is what those loads feel like in aggregate. The reason we can’t fix it is that we’ve never had language for it.
Same fragility that fought me in Gothenburg. Deeper layer. Invisible because we’ve been living inside it so long we think it’s normal. But a pattern has been quietly building over the past twenty-five years that suggests it doesn’t have to be.
The four waves of unbinding
The Duplication Tax isn’t a design failure. It’s a paradigm — content goes to tools. That’s just how it works. Except it’s not. Over the past twenty-five years, the relationship between content and its container has been inverting in waves — each one removing a binding that the previous wave took for granted.
Wave 1: The screen unbound (2000–2010)
Before responsive design, web layouts were fixed — typically 960 pixels wide. Anyone on a different device got a broken experience or a separate “mobile site.” Allsopp’s philosophy and Marcotte’s technique changed that: fluid grids, flexible images, media queries. The interface adapted to the device rather than the other way around.
What was unbound: the screen.
But the content still lived inside the website. The layout was fluid; the destination was fixed.
Wave 2: The device unbound (2010s–)
Spotify didn’t just digitise music — it made the service follow you. You’re at your desk, listening on the computer. You leave the office — your phone picks up. You get in the car — the car audio takes over. The service is never interrupted; it continues in whatever context you’re in. The service follows the user.
Teams did the same for communication: start a call on the laptop, walk out the door, and the call transfers to your phone. Get in the car — the car’s system takes over. The session follows the person, not the hardware.
In 2017, I stood in front of my department and tried to articulate this shift. The framework I used came from Fjord (the futures research agency, later acquired by Accenture) — three approaches to how services relate to devices. I called them the Three Cs in my presentation:
- Consistent: Same content across devices. Responsive design — one website fits all screens.
- Continuous: The service flows between devices. Not replicated — taken over. Spotify. Teams. Start here, continue there.
- Complementary: Different devices play different roles. Your phone authenticates your desktop banking. Each device contributes its strength without duplicating the others.
I ended with questions: “How will tomorrow’s multi-device ecosystem affect our roles? Do we need new methods, tools, ways of working?” I didn’t realise those weren’t just questions about devices. They were questions about a deeper structural shift I couldn’t yet see.
What was unbound: the device.
Wave 3: The ecosystem unbound (2024–)
In March 2025, OpenAI deprecated their proprietary Assistants API and adopted Anthropic’s Model Context Protocol. Then both companies donated MCP to the Linux Foundation.
Why abandon your own framework for a competitor’s open standard? Because MCP solves the N×M connectivity problem. Twenty AI models and fifty data sources would require a thousand custom integrations. MCP requires seventy standard connections. “USB-C for AI” — any intelligence connects to any data source without bespoke wiring.
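The arithmetic behind the N×M problem is simple enough to sketch. The figures (twenty models, fifty sources) are the essay’s illustrative numbers; the function names are mine:

```python
def point_to_point(models: int, sources: int) -> int:
    """Custom integrations needed when every model wires directly to every source."""
    return models * sources

def via_protocol(models: int, sources: int) -> int:
    """Connections needed when both sides speak one shared protocol:
    each model implements one client, each source one server."""
    return models + sources

# The essay's example: 20 AI models, 50 data sources.
print(point_to_point(20, 50))  # → 1000 custom integrations
print(via_protocol(20, 50))    # → 70 standard connections
```

The quadratic term is why the ecosystem converged on a shared protocol: every new source added to a point-to-point world costs one integration per model, while in a protocol world it costs exactly one.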
Over 16,000 MCP servers. 97 million SDK downloads per month. And in Q1 2026, $285 billion wiped from software valuations. The “SaaSpocalypse.” The walled-garden model — build a tool, trap content inside it, charge rent — is cracking. Software is shifting from destination to substrate.
What’s unbound: the tool ecosystem.
Wave 4: The application unbound (emerging)
The operating system paradigm is shifting: from procedural (you operate the tool), to intent-based (you state a goal), to ambient (intelligence is simply present wherever content lives). No application to open. No interface to learn.
Wave 1 freed the layout from the screen. Wave 2 freed the service from the device. Wave 3 freed intelligence from the tool. Wave 4 frees the user from the application itself. Each wave is the same structural signature applied deeper — and each makes the previous one look like an optimisation of the old binding rather than a true inversion.
What’s unbound: the application itself.
But here’s what the four waves reveal if you look at them together — not at what they freed, but at what they exposed.
Each wave removed a binding: the screen, the device, the ecosystem, the application. And each time, the same thing became visible underneath: the real work was never inside the tool. It was in the space between them. The handover. The context that had to be rebuilt every time you left one environment and arrived at another. The meaning that got lost in translation.
We’ve been staring at the tools — measuring them, optimising them, comparing their features — while the actual cost accumulated in the transitions. The Tuesday afternoon from Part 1, the 144 hours per year, the Duplication Tax — none of that happens inside Notion or Confluence or Jira. It happens in the gaps. When you leave one tool and arrive at another. When you reconstruct what you were thinking. When you reformat something that was already formatted perfectly well somewhere else.
Organisations measure what happens inside tools. They count tasks completed in Jira, documents created in SharePoint, messages sent in Slack. But nobody measures what happens between them — the ambient contextual work of carrying meaning across boundaries. And that, it turns out, is where most of the cognitive cost lives.
The four waves have been progressively revealing this. Responsive design showed that the layout wasn’t the point — the content was. Service mobility showed that the device wasn’t the point — the session was. Protocol interoperability is showing that the tool isn’t the point — the context is. And ambient intelligence will show that the application isn’t the point — the work is.
The same Three Cs that described how services relate to devices in 2017 now describe how intelligence relates to content. Consistent, Continuous, Complementary — just at a deeper layer. In Wave 2, the service followed the user. Now the question is broader: can intelligence move to where the content already lives — and carry back the results?
That question leads somewhere specific. And to get there, we need to reverse the direction of travel.
The inversion
Let me start with what it looks like in practice, because this is where the shift from tools to context becomes concrete.
I use Google NotebookLM as a source library — documents, research, notes organised by topic. Alongside it, a conversational AI connects to the Notebook as a source. When I ask the AI a question, it reaches into the Notebook, finds relevant material, synthesises it, and responds. No copying. No reformatting. No migration.
But here’s what makes this more than a clever integration: it’s not just the sources from NotebookLM that become available. It’s the functionality. The Notebook’s ability to structure knowledge, to surface connections between documents, to generate audio overviews — those capabilities are accessible from the conversational AI without rebuilding them. The AI doesn’t just read the Notebook’s data. It uses the Notebook’s strengths. Each tool operates in its own environment, contributing what it does best, while intelligence moves between them and carries the results back.
Now compare that with the old version of the same task. Gather research in one tool. Copy the relevant pieces into the AI chat. Get a response. Copy the response back. Need more context — go back to the research tool. Copy more. Paste again. I’m the integration layer. The human middleware carrying meaning between systems that can’t talk to each other.
The difference between these two workflows isn’t speed. It’s the direction of travel.
In the old workflow, content travels to where intelligence is. I carry my material to the tool, the tool processes it, I carry the result back. The context moves; the tool stays still.
In the new workflow, intelligence travels to where content is. My material stays where it lives. The AI arrives, does its work within that context, and the output stays alongside my existing files. The content stays still; the intelligence moves.
This reversal of direction is what I mean by the inversion. For decades, the default has been: content goes to tools. The inversion says: intelligence comes to content. It’s not a new feature added to existing tools. It’s a structural change in which direction things flow.
And the reason it matters — the reason it’s not just a technical rearrangement — goes back to what the four waves exposed. If the real work is in the context between tools, then every time you force content to travel to a tool, you’re creating that in-between work. You’re generating the handovers, the reformatting, the context loss. The old direction of travel creates the Duplication Tax. The inversion eliminates it — not by building a better tool, but by removing the need for content to travel at all.
This is the distinction that makes everything else in this post concrete: structural versus ad hoc.
When intelligence shares space with content — when the AI is present in the environment where your work already lives — that’s structural. The logic and the material coexist. There’s no gap between them for context to get lost in.
When you carry content to intelligence and carry results back — copy-pasting into ChatGPT, exporting to an analytics tool, uploading to a separate platform — that’s ad hoc. It might be faster than doing the work manually. But the architecture is the same as it ever was. Content migrates to the tool. The Duplication Tax applies. And the real work — the contextual, ambient work of carrying meaning across boundaries — remains entirely on you.
So how do you tell the difference? And more importantly — how do you move from ad hoc to structural?
The core test
At every decision point, one question clarifies everything:
Does this make AI structural — intelligence sharing space with content — or does it create a faster ad hoc workflow where content still travels to intelligence?
If the answer is ad hoc — if users still need to carry content to intelligence and carry results back — you haven’t inverted. You’ve optimised the old paradigm. You’ve built a better transparent GIF.
This is harder to apply than it sounds, because ad hoc can feel like progress. A team that uses ChatGPT to draft emails is getting real value. A designer who generates variations in Midjourney is working faster. A developer who pastes code into an AI for review is catching bugs earlier. None of these are bad. But none of them are structural. The content still goes to the tool. The value evaporates the moment the user stops manually carrying things between systems.
Structural integration looks different. It looks like intelligence already being present when you open the document. It looks like your file system being legible to an AI that can act within it. It looks like an agent that reads your project folder, understands the context, does its work, and saves the output alongside your existing files — without you ever leaving the environment where you were already thinking.
How to think about the inversion
If the paradigm shift is real — and the evidence across four waves suggests it is — then how do you actually make it happen? Not in the abstract, but in your team, your organisation, your Tuesday afternoon?
I’ve been trying out a practical framework for this, and it starts not with AI but with something much less glamorous: an inventory of what’s already connected to what. What surprised me was that the first useful thing wasn’t a new idea — it was looking at what was already there with different eyes.
1. Start with the integration landscape.
Before touching user journeys or workflows, map the existing integrations in your system. Most organisations have accumulated them over years — some essential, some legacy, some existing purely because content had to migrate between tools.
For each integration, ask: is this structural (these systems genuinely need to exchange data for a real business function), migration (this exists because content had to be moved to fit the tool’s data model), or synchronisation (this keeps the same content consistent across multiple tools that each maintain their own copy)?
Migration integrations are your primary inversion candidates. If intelligence can reach the content where it lives, the migration becomes unnecessary. Synchronisation integrations are secondary candidates — if there’s a single source of truth and intelligence can access it from anywhere, the sync layer dissolves. Structural integrations stay. And they become simpler, because they’re no longer carrying migration and sync overhead.
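The inventory exercise can be made concrete as a tiny classification pass. The integration names and their labels below are invented examples; only the three categories come from the step itself:

```python
# An integration inventory as step 1 describes it. Names and
# classifications are illustrative, not a real system.
inventory = [
    ("CRM -> Billing",        "structural"),       # real business function
    ("Docs -> Wiki importer", "migration"),        # exists to reshape content
    ("Wiki <-> Drive sync",   "synchronisation"),  # keeps parallel copies alive
    ("Notes -> Tasks sync",   "synchronisation"),
]

def inversion_candidates(inventory):
    """Migration integrations are primary inversion candidates,
    synchronisation integrations secondary; structural ones stay."""
    primary   = [name for name, kind in inventory if kind == "migration"]
    secondary = [name for name, kind in inventory if kind == "synchronisation"]
    return primary, secondary

primary, secondary = inversion_candidates(inventory)
print(primary)    # the importer is the first thing to question
print(secondary)  # the sync layers dissolve if one source of truth exists
```

Even at this toy scale the pattern is visible: only one of the four integrations survives the inversion on its own merits.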
The validation is simple: does removing migration and sync integrations reduce overall system complexity? If yes, proceed. If the proposed change adds complexity on top of existing integrations, stop and reassess. The goal of the inversion is to reduce orchestration, not add another layer of it. Once I’d done this for my own setup, the next question was obvious: if these integrations exist because content has to travel, where exactly does it travel?
2. Then trace the context path.
This is where service design earns its place. Use user journeys and service blueprints — the same tools we’ve used for years — but with a specific analytical lens. Remember: the service needs to follow the user’s context. At every touchpoint, ask: where does content move? Why does it move? What transforms when it moves? What’s duplicated? What breaks?
Don’t map at the task level — “the user creates a report.” Map at the activity level: the user gathers data in Tool A, copies it to Tool B, reformats it to fit Tool B’s structure, adds analysis in Tool C, exports to Tool D for review, receives feedback in Tool E, returns to Tool B to update. Each handoff is a migration point. Each migration point is a candidate for inversion. Each carries a cognitive cost that nobody is measuring but everyone is feeling.
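Counting handoffs on a context path is easy to sketch. The seven-step path below is the report example from this section, spelled out as data; the counting logic is mine, offered as a sketch:

```python
# The report activity mapped at the activity level: the sequence of
# tools the content actually passes through. Names are illustrative.
path = ["Tool A", "Tool B", "Tool B", "Tool C", "Tool D", "Tool E", "Tool B"]

def migration_points(path: list[str]) -> list[tuple[str, str]]:
    """Each change of tool between consecutive steps is a handoff --
    a candidate for inversion."""
    return [(a, b) for a, b in zip(path, path[1:]) if a != b]

handoffs = migration_points(path)
print(len(handoffs))  # → 5 handoffs in this seven-step activity
```

Five handoffs in one routine activity, each an unmeasured context rebuild: that is the Duplication Tax made countable.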
Connect the migration points into a context path — the route content takes through the user’s actual process. Where does the path stay within a single context? That’s low friction. Where does it cross between contexts? That’s high friction. The high-friction crossings are your inversion priorities. This is where my 2017 presentation came back to me. The Three Cs I’d borrowed from Fjord to describe how services relate to devices — they turned out to describe something much broader.
3. Then classify what kind of inversion is needed.
This is where the Three Cs come back — not as a device strategy this time, but as an AI integration strategy.
- Consistent: The same content needs to appear across multiple channels or surfaces. Currently it’s duplicated and reformatted for each one. In the inverted state, content stays structured in one place and intelligence presents it appropriately for each context. Think: product information maintained once, rendered differently on a website, a mobile app, an internal dashboard, and a partner portal.
- Continuous: A process flows across devices, contexts, or time, and the user needs to pick up where they left off. Currently, users manually re-establish context when switching. In the inverted state, the session and context persist and transfer automatically. Think: starting a document review on the desktop, continuing annotation on a tablet during the commute, finalising approval on the phone. Same process, same state, different devices.
- Complementary: Different tools contribute their specific strengths to the same workflow without duplication. Currently, users import and export between tools, maintaining parallel copies. In the inverted state, tools are linked by their capabilities, each operating in its own environment but connected. Think: a knowledge base connected to a conversational AI connected to a presentation tool — each doing what it does best, linked rather than merged.
Each C demands different technical architecture, different governance, and different design. Misclassifying a Complementary need as Consistent — trying to put everything in one place — recreates the monolithic problem. Misclassifying a Consistent need as Complementary — linking separate systems for content that should simply be unified — creates unnecessary complexity. Getting the classification right was the step that took me longest. Once I had it, the design question became surprisingly concrete.
4. Then design the inversion.
For each migration point that passes the checks, think in four layers:
- What stays — the user’s content structure, naming conventions, organisational logic. This is the anchor. In the inverted paradigm, the user’s own structure IS the architecture. The AI adapts to it, not the reverse.
- What arrives — the intelligence, the capability, the processing that currently requires a tool visit. In the inverted state, this arrives at the content’s location. Via an agent, an MCP connection, an embedded capability.
- What connects — the protocol layer that makes the arrival possible. MCP, APIs, agent frameworks, authentication.
- What’s governed — security, audit trails, data governance, compliance. Every connection is a surface. Every surface needs accountability.
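The four layers can be captured as a simple checklist structure for each migration point. The field values below are invented examples; only the four layer names come from the framework:

```python
from dataclasses import dataclass

@dataclass
class InversionPlan:
    """The four design layers for one migration point.
    Field values here are illustrative, not a real system."""
    stays: str     # the user's own content structure -- the anchor
    arrives: str   # the capability that currently requires a tool visit
    connects: str  # the protocol layer that makes arrival possible
    governed: str  # accountability for the new connection

plan = InversionPlan(
    stays="project folder layout and naming conventions",
    arrives="summarisation agent acting inside the folder",
    connects="MCP server exposing folder read/write tools",
    governed="audit log of every file the agent touches",
)
```

Writing the plan down this way forces the fourth field to be filled in; in practice, governance is the layer most often left blank.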
I can hear the enterprise architects reading this and thinking: it’s not that simple. And they’re right — decades of integration architecture have taught them how tangled these systems get. But that’s precisely the point. The outcome of this assessment isn’t adding another integration layer. It’s removing complexity. Every migration integration you eliminate is a connection that no longer needs maintaining, monitoring, or debugging. Every sync layer that dissolves is a source of truth that no longer conflicts with itself. The inversion succeeds when the system gets simpler, not when it gets more connected. If that framing resonates with the architects, the rest of the conversation gets much easier.
The maturity question
This isn’t an overnight transformation. It’s a spectrum, and most organisations will operate across multiple levels simultaneously.
At the most basic level — where most organisations are today — AI is an external consultation. Workers copy-paste to ChatGPT. Shadow AI runs on personal subscriptions. Intelligence is disconnected from content. This is Level 0: ad hoc.
One step up, some tools are linked. Intelligence can read content from select sources, but content still primarily lives within tool-specific ecosystems. Integration is partial. Level 1: connected.
Further along, content has a clear sovereign location. Intelligence arrives at content rather than content migrating to tools. Migration integrations are being eliminated. Level 2: context-first.
At the far end — and this is where things get philosophical — intelligence is present wherever content lives. The system anticipates needs. No application to visit. The user’s structure is the architecture. Purpose-built tools remain where governance requires them, connected to the ambient layer. Level 3: ambient.
The important insight is that Level 3 is not the goal for everything. Some processes should stay at Level 0 — deliberately purpose-built, because the domain demands constraints, expertise development, or regulatory compliance. The inversion framework isn’t a mandate to invert everything. It’s a lens for identifying where the inversion creates value and where it would destroy it. Which brings us to the honest objections.
The hollowed mind and other honest objections
I want to take the counterarguments seriously. Not as “things people who don’t understand will say,” but as genuine structural problems that limit where and how the inversion can be applied. Because if the paradigm is as powerful as I’ve been arguing, it’s also powerful enough to cause serious damage if applied without thinking.
The security surface
Every connection is an attack vector. When intelligence can reach into content wherever it lives — across file systems, databases, APIs, and services — the security surface expands dramatically. Prompt injection attacks have increased by 540 per cent since agentic AI architectures became mainstream. An MCP server that gives an AI agent access to your file system also gives any compromised prompt a path to your file system.
This isn’t a problem that will be solved by better passwords. The architecture itself creates a new category of vulnerability: one where the attack surface grows with every integration you add. The more connected the system, the more exposed it becomes.
The governance gap
In the old paradigm — content goes to tool — at least the tool could enforce rules. Documents in SharePoint inherit SharePoint’s permissions. Data in a regulated database is governed by that database’s audit trail. When intelligence reaches into content across systems, the question becomes: whose rules apply?
If an AI agent reads a document from System A, combines it with data from System B, and generates output in System C — which system’s governance applies to the output? Who audits the reasoning? Where’s the accountability trail? Current agentic architectures have what researchers call the “ephemeral identity problem”: the agent acts on behalf of the user but doesn’t have a persistent identity in any of the systems it touches. It’s a ghost in the governance framework.
For industries with regulatory requirements — healthcare, finance, legal — this isn’t a philosophical concern. It’s a compliance barrier. And it’s one reason why some processes should remain purpose-built: not because the inversion wouldn’t work technically, but because the accountability architecture doesn’t yet exist.
The quality floor
Here’s an uncomfortable fact about large language models: calibrated models must hallucinate. This isn’t a bug that will be fixed. It’s a mathematical property of how probability distributions work in neural networks. A model that is well-calibrated — meaning it’s honest about its uncertainty — will necessarily generate some outputs that are wrong, because perfect calibration means occasionally saying high-confidence things that happen to be false.
When AI is an external consultation — ad hoc, copy-paste — the user is the quality filter. You read the output, evaluate it, decide whether to use it. But when intelligence is ambient, present everywhere, integrated into the flow of content — who filters? If the AI’s output is saved alongside your files, mixed into your context, used as input for the next interaction, errors propagate invisibly. The quality floor isn’t the AI’s accuracy rate. It’s the compounding effect of small errors across an integrated system.
The hollowed mind
This objection is like a small stone in your shoe. You know it’s there. You know you should stop and deal with it. But you keep walking because everything else is moving so fast.
When tools require you to learn their logic — their data model, their constraints — that learning curve isn’t just friction. It’s education. A designer who masters Figma develops spatial reasoning. An analyst who learns SQL develops data modelling intuition. If intelligence handles the tool-work, what happens to that learning?
The OECD published data showing that 80 per cent of students who use AI writing tools cannot independently recall what they wrote. Not the phrasing — the ideas. The essay gets submitted, the grade comes back. But the thinking that was supposed to develop in the process? Hollow. The performance was rented, not earned.
But here’s where I think the objection deserves more nuance than it usually gets. We might be judging with old-paradigm criteria. The old paradigm valued deep mastery of specific tools and techniques because that’s how work got done — you had to know the tool to produce the output. In the inverted paradigm, the relationship between knowledge and work shifts.
What I need in my own practice isn’t deep mastery of every system I touch. It’s awareness — understanding how the systems work in principle, at the level that’s relevant to my work. Knowing enough to judge, to validate, to steer. The concern isn’t losing tool-specific skills. It’s losing the capacity for reasoning, for problem-solving, for the kind of thinking that underpins all the tools. That’s the real “hollowed mind” — not the loss of specific competencies, but the erosion of the cognitive foundation those competencies were built on.
And here the data is concerning. AI-assisted teams complete tasks faster, but the quality of reasoning doesn’t scale with the speed. Weekly metrics improve. Quarterly innovation doesn’t. The thing that degrades — the slow accumulation of judgement and expertise — is exactly the kind of thing that doesn’t show up in a dashboard. If knowledge isn’t practised in experience, the investment in deep learning becomes hollow. The question is how much depth you need, and in what — and that calculation is different in a world where intelligence can arrive at your content.
The sovereignty trap
The final objection is perhaps the most subtle. If intelligence is truly ambient — present wherever content lives, anticipating needs, acting proactively — then who is directing whom?
The paradigm promises cognitive sovereignty: your structure remains the architecture, intelligence adapts to you. But ambient systems have a way of shaping the context they inhabit. Recommendation algorithms were supposed to help you find what you wanted. Instead, they shaped what you wanted. Social media platforms were supposed to connect you with your friends. Instead, they redefined what friendship looks like.
When intelligence follows content — when it’s always there, always helpful, always suggesting the next step — the question isn’t whether it’s useful. The question is whether your choices are still your own, or whether you’re navigating a landscape that the intelligence has quietly reshaped around you.
None of these objections invalidate the inversion. But they define its boundaries.
The security surface means the connection layer needs genuine architectural attention. The governance gap means some domains need purpose-built tools with explicit accountability. The quality floor means humans must remain in evaluation loops, not just execution loops. The hollowed mind — the erosion of the cognitive foundation beneath our tools, the thinking and reasoning capacity that no AI can rent back to us — means we need to pay deliberate attention to cognitive sovereignty: the ability to think, judge, and decide independently of the systems we use. Not as a vague aspiration, but as something we actively design for — with structured thinking modes, with ways of measuring what’s being gained and what’s being lost, with systems that keep the human cognitively engaged even when the AI could do it all. And the sovereignty trap means the inverted paradigm needs something that pure ambient intelligence doesn’t naturally provide.
It needs friction. But not the old friction — not the duplication tax, not the context-switching. A different kind.
Designed friction and the question that remains
Let me return to the water metaphor one last time.
Allsopp was right in 2000: the web’s nature is fluidity. Fighting it creates fragility. But water without banks isn’t a river — it’s a flood. The strength of water isn’t that it flows everywhere. It’s that it flows between things. The banks give the river its direction, its force, its usefulness. Without them, you just have a swamp.
The inversion I’ve been describing — intelligence following content, the four waves of unbinding, the dissolution of the tool-as-destination — is the water finding its natural flow. After twenty-five years of trying to build rigid containers for something that wants to be fluid, we’re finally beginning to let go. MCP is the riverbed. Ambient intelligence is the current. The content stays where it is, and intelligence flows to it.
But the honest objections in Part 4 are the banks.
If we let intelligence flow without any friction — without any point where the human must stop, evaluate, decide, reckon with the material — we get the hollowed mind. We get the productivity mirage. We get the sovereignty trap. We get fast water and no direction.
What the inverted paradigm needs isn’t the old friction. Not the friction of carrying content between tools, of rebuilding context, of learning seventeen different interfaces for what is essentially the same task. That was waste friction — the kind that drains your eighteen working days and leaves nothing behind. The inversion rightly eliminates it.
What it needs is designed friction.
Deliberate, intentional moments where the human must engage cognitively with the material. Not because the system is badly designed, but because the engagement itself is the value. Moments where you evaluate the AI’s output rather than accepting it. Where you make a decision rather than following a suggestion. Where you structure your own thinking rather than letting the ambient system structure it for you. Where the tool asks you a question instead of providing an answer.
This is a design challenge, not a technology challenge. And it’s the design challenge of the next decade.
Because here’s the paradox at the heart of the inverted paradigm: the same unbinding that frees us from waste friction also removes the incidental friction that was quietly training us. The struggle with the tool that taught us the skill. The reformatting that forced us to re-engage with the material. The context switch that made us notice what we’d been taking for granted. Some of that friction was productive — not because it was well-designed, but because it was there. Learning happened in the cracks.
In the ambient world, the cracks close. Intelligence is seamless, present, anticipatory. The question is whether we can design new cracks — intentional ones, productive ones — that preserve human agency within a system that’s optimised to remove the need for it.
I don’t have a complete answer to this. But I’ve been building toward one.
Over the past two years, this question — how do you keep humans cognitively sovereign inside ambient intelligence? — has become the central thread of my work. It’s led me to develop structured thinking modes for navigating complex problems, ways to measure the hidden orchestration costs that the old paradigm made invisible, and methods for identifying what goes missing when we design systems without accounting for human behaviour. Not as abstract theory, but as practical tools I use every day — and that I’m building into something larger.
The specifics of that work are for future posts. But the foundation is what this post has been about: understanding why the paradigm is inverting, seeing that the shift is from tools to context, recognising that the inversion eliminates waste friction but doesn’t automatically preserve the cognitive engagement we need. The designed friction isn’t an afterthought. It’s the entire design challenge that follows from the inversion.
When production is free, judgement is the work
When the AI can generate the output in seconds, the value shifts from production to judgement. And judgement requires friction. Not the accidental friction of broken tools, but the designed friction of intentional cognitive engagement.
But the shift goes deeper than production versus judgement. What’s actually changing is the direction of focus itself. We’ve built our entire working culture around output — measuring it, optimising it, celebrating it. AI accelerated that focus: faster drafts, more variations, instant results. But when output becomes abundant, it stops being the scarce resource. What becomes scarce is the quality of what goes in. The structuring of sources. The governance of what enters the system. The judgement about which inputs matter and how they should flow through the process. The paradigm isn’t just inverting from tools to context — it’s inverting from output focus to input focus. And our role shifts with it: from ensuring production quality to ensuring input quality, from making things to judging what goes in and how it progresses through to the result. That’s the shift nobody’s talking about yet.
The paradigm shift is happening whether we name it or not. The data is there. The adoption curves are there. The $285 billion in evaporated software valuations is there. Content used to go to tools. Intelligence is coming to content. The four waves have been building toward this for twenty-five years, and the direction isn’t going to reverse.
But how we inhabit it — that’s the design challenge. Not the technology, not the protocols, not the architecture. The question is whether we can build the banks that give the river its direction: the deliberate cognitive engagement that keeps human agency intact inside the flow.
In Gothenburg in the 1990s, I learned what happens when you fight the medium. The medium wins. The paradigm that Allsopp named and Marcotte codified has been unbinding content from its containers for a quarter of a century, and we’re finally approaching the last binding — the application itself.
We can stop breathing carefully around our workflows. That fragility is ending. But what comes next requires something we’ve never had to design before: the right friction, in the right places, to ensure that the intelligence arriving at our content actually helps us think better — rather than quietly making our own thinking unnecessary.
Disclaimer
AI-assisted content: This post was researched and co-authored with Claude (Anthropic), with additional deep research conducted via Google Gemini and Perplexity AI. The personal experiences, frameworks, and analysis are the author’s own. AI tools were used for literature synthesis, source discovery, counterargument generation, and drafting support.
Opinion note: This is a personal exploration blog. The views, frameworks, and interpretations expressed here are my own, grounded in over twenty years of UX practice but not representing any organisation or institution.
Source attribution: Research draws on peer-reviewed papers, industry reports (OECD, Deloitte, McKinsey), technical documentation (Anthropic, Linux Foundation, GitHub), and historical web design literature. Key sources are listed below.