From rituals to readiness — A UX practitioner’s self-assessment for the AI shift
In Part 1, we looked at the Orchestration Load Framework from the outside — a new model for understanding what happens to human cognition when AI enters the workflow. But there’s a harder question hiding behind the theory, and it’s the one that kept me up at night: what happens when you point that lens at UX practice itself? When you audit not the tools your users work with, but the system you work in? The diagnosis is uncomfortable. The opportunity is enormous. And the transition has already started whether we’re ready or not.
The week you recognise
I want to describe a week. Tell me if it sounds familiar.
Monday morning. Standup at 9:15. You report on the screens you pushed to review on Friday. Someone asks about the edge case you haven’t had time to think about, so you improvise an answer that sounds reasonable. After standup, you open Figma. The component library needs updating because the design system team changed the spacing tokens over the weekend. An hour disappears.
By 11 o’clock, you’re in a refinement session, translating a product requirement into something developers can estimate. The requirement is vague — “improve the onboarding experience” — but it’s already been sized and slotted into the sprint. You’re not designing the onboarding experience. You’re fitting a design into a container that was shaped before you arrived.
Lunch is at your desk because the afternoon is back-to-back: design review at 13:00, a cross-team alignment meeting at 14:00, a stakeholder walkthrough at 15:30. The design review focuses on whether your date picker matches the component library. Nobody asks whether the date picker is the right pattern for the problem. The alignment meeting produces three action items, all of which involve updating Figma files. The stakeholder walkthrough goes well — they like the colours.
By 16:30, you have forty-five minutes of unscheduled time. You’d planned to revisit the user journey map you started sketching two weeks ago. Instead, you spend it responding to comments in Figma, updating the handoff documentation, and answering a Slack thread about icon sizes.
You go home. You did your job. You delivered everything that was asked.
And somewhere in the back of your mind, the same quiet thought you’ve been having for months: I’m not doing what I’m supposed to be doing.
You’re right. You’re not. And it’s not your fault.
Where 40 hours actually go
When I started applying the Orchestration Load lens to UX practice itself — using the same analytical framework we built for evaluating AI tools — the first thing I needed was data on what designers actually do with their time. Not what the methodology says they should do. What they actually do.
The research paints a picture that most practitioners will recognise immediately but rarely see quantified. Across industry surveys, time-tracking studies, and practitioner discourse, the pattern is remarkably consistent:
Figma production and specification: 12-15 hours per week (30-38%). This isn’t “design” in the sense the discipline means it. This is component adaptation, auto-layout wrestling, responsive variant management, and specification preparation. The creative tool has become a specification engine. 61% of designers now use Figma as their primary handoff mechanism — which means Figma isn’t where you design; it’s where you document what you’ve already decided. Except you haven’t had time to decide anything, because you’ve been in Figma.
Design system maintenance: 4-6 hours (10-15%). Keeping the library accurate, documenting changes, reconciling inconsistencies. Research from Shopify shows that 67% of design system team time goes to documentation. After two years, design system accuracy drops to 31%. You’re maintaining a system that’s decaying faster than you can maintain it.
Process ceremonies: 5-7 hours (12-18%). Standups, planning sessions, refinement, retrospectives, design critiques. Each one individually reasonable. Collectively, they consume an entire working day.
Cross-team communication and alignment: 4-5 hours (10-12%). The research surfaces something practitioners feel but rarely articulate — there’s an invisible practice I’ve started calling “prewiring.” Engaging stakeholders 24 hours before a review to mitigate objections. Diagnosing each stakeholder’s incentives and fears to frame work in terms they’ll accept. This informal political labour is essential but never tracked in any project schedule.
Documentation and handoff: 3-4 hours (8-10%). Preparing specifications so that development can implement what you’ve designed. 90% of designers report differences between their designs and what gets built. The documentation exists to close that gap. It never fully does.
Context recovery: 2-3 hours (5-8%). Re-finding things. Re-reading notes. Reconstructing where you left off after a meeting broke your focus. 68% of designers document in multiple locations, which means finding what you documented is itself a task.
Now add it up. Even at the low end of each range, that's 30 hours gone — and a typical week lands between 30 and 37 hours consumed by production, maintenance, process compliance, and boundary translation.
Actual design thinking: 3-5 hours per week. Eight to twelve percent.
And the thing that the discipline exists for — understanding users, validating assumptions, testing whether what we’re building actually serves people? User research gets 0-2 hours. Zero to five percent.
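The breakdown above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch against a nominal 40-hour week, not the underlying survey data — the category names and hour ranges are simply the figures quoted above:

```python
# Arithmetic sketch: category hour ranges as shares of a 40-hour week.
# Figures are the ranges quoted in the article, not raw survey data.

WEEK_HOURS = 40

# (low hours, high hours) per category
allocation = {
    "Figma production and specification": (12, 15),
    "Design system maintenance": (4, 6),
    "Process ceremonies": (5, 7),
    "Cross-team communication and alignment": (4, 5),
    "Documentation and handoff": (3, 4),
    "Context recovery": (2, 3),
}

def pct(hours):
    """Hours as a rounded share of a 40-hour week."""
    return round(hours / WEEK_HOURS * 100)

for name, (lo, hi) in allocation.items():
    print(f"{name}: {lo}-{hi}h = {pct(lo)}-{pct(hi)}%")

low_total = sum(lo for lo, _ in allocation.values())
high_total = sum(hi for _, hi in allocation.values())
print(f"Total consumed: {low_total}-{high_total}h")
# Even at the low end, 30 of 40 hours are gone before strategic work
# begins; the categories rarely all peak in the same week, which is why
# a typical week lands in the 30-37h band rather than at the 40h maximum.
```

Running this reproduces the percentage ranges quoted per category, and makes the squeeze visible: the floor alone leaves at most a quarter of the week for everything else.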
Let that land. The methodology that defines our profession — discover, synthesise, ideate, prototype, test — gets less than 15% of available capacity. Not because designers don’t know how to do it. Because the system consumes everything before the strategic work begins.
Have you ever tracked your own week this closely? And if you did, would the numbers surprise you — or just confirm what you already felt?
Three process architectures & a prayer
The time allocation data tells you what’s happening. But it doesn’t explain why. To see the mechanism, you need to look at the environment designers operate in — and specifically, at the fact that UX practice doesn’t exist in one process. It exists at the collision point of three.
The three-layer process collision
Layer 1: The design process. Double Diamond. Discover, Define, Develop, Deliver. Or some local variant — Design Thinking, Lean UX, whatever flavour your organisation adopted. This process assumes dedicated time for research, synthesis, and iteration. It assumes you understand the problem before you commit to a solution.
Layer 2: The development process. Agile. Scrum. Two-week sprints. Stories estimated in points. Velocity tracked on dashboards. This process assumes work can be decomposed into small, deliverable increments. It optimises for throughput.
Layer 3: The organisational process. Quarterly OKRs. Annual roadmaps. Budget cycles. Stakeholder reviews. This process assumes work can be planned months in advance and measured against predetermined outcomes.
Here’s the structural problem: these three processes were never designed to work together. They run simultaneously, on different timescales, with different success criteria. And the designer sits at the intersection of all three, translating continuously between them.
The design process says: understand the problem before solving it. The development process says: deliver something this sprint. The organisational process says: hit this quarter’s targets. When these three demands collide — and they collide every Monday morning — the design process loses. Every time. Because understanding a problem takes uncertain time, delivering a sprint increment takes exactly two weeks, and quarterly targets have executive visibility.
What falls out? Research. Synthesis. Validation. Testing. The activities that take uncertain time, produce ambiguous outputs, and resist sprint-sized packaging. These aren’t cut because anyone decides they don’t matter. They’re cut because the compound pressure of three simultaneous process architectures squeezes out anything that can’t be estimated, tracked, and delivered on a fixed cadence.
The designer becomes the boundary-spanning agent who must maintain fluency in all three process languages. They translate research insights into user stories. They translate design rationale into acceptance criteria. They translate creative exploration into sprint-compatible deliverables. Each translation is a cognitive boundary crossing — a Cx event in OL terms. And the compound load of translating across all three layers simultaneously is what practitioners feel as exhaustion, frustration, and the sense that “we’re not doing what we’re supposed to do.”
They’re right. They’re not. Not because they don’t know how — because the compound boundary load consumes the capacity they’d need to do it.
The pre-design failure pattern
There’s a specific mechanism worth naming, because once you see it, you can’t unsee it.
By the time a UX team is engaged to improve an experience, the roadmap is typically locked. Features have been pre-approved. Engineering effort has already been estimated. Strategic decisions were made — or should have been made — months earlier, in meetings the design team wasn’t invited to.
This means UX doesn’t fail during wireframing or prototyping. It fails before the designer touches a single screen. The decisions that determine whether the experience will succeed or fail have already been made based on internal consensus, executive intuition, and competitive parity — rather than external evidence.
Design becomes a mechanism for applying an aesthetic layer over unvalidated assumptions.
And here’s the compounding effect: once a mockup exists, stakeholders harden their positions. The psychological commitment to designs once visualised is well-documented. “Good enough” becomes the standard. The 1-10-100 rule — a dollar spent in design saves ten in development and a hundred in post-launch fixes — is universally cited and systematically ignored when velocity is the governing metric.
When was the last time your team killed a feature because the research said it wouldn’t work? Or does the research happen after the feature is already committed?
Map the methodology against reality
If the time allocation data shows where the hours go, and the three-layer collision explains why, then this section shows what it costs.
Map the formal UX methodology — the one we teach, the one we advocate for, the one that defines our professional identity — against what actually happens in practice:
- Discovery and research. Theory says 20-25% of project time. Reality delivers 0-5%. The deficit is critical. 61% of practitioners struggle to recruit research participants. 97% of organisations fall below strategic research maturity. The discovery phase that the double diamond requires? Routinely abandoned under sprint pressure.
- Synthesis and analysis. Theory says 10-15%. Reality delivers 2-3%. A third of what little research time exists gets consumed by reporting and synthesis overhead — making the research digestible for stakeholders rather than using it to inform design.
- Ideation and exploration. Theory says 15-20%. Reality delivers 3-5%. The exploration that generates novel solutions requires time without predetermined outcomes. Sprints don’t have time without predetermined outcomes.
- Prototyping and testing. Theory says 15-20%. Reality delivers 2-4%. Testing is, in the words of the research, “the first activity tossed overboard when sprint goals are at risk.” Which is to say: testing is tossed overboard nearly every sprint.
- Production and specification. Theory says 10-15%. Reality delivers 30-38%. Three times overweight. High-fidelity design-as-specification has become the structural pattern — not because it’s good methodology, but because it’s the only output format the three-layer process collision accepts.
- Process and administration. Theory says 5-10%. Reality delivers 25-35%. Five times overweight.
The pattern is stark: every phase that involves thinking is under-resourced. Every phase that involves producing and maintaining is over-resourced. The methodology gap isn’t about knowledge — designers know how to do research, synthesis, and testing. It’s about capacity. The system consumes all available capacity before the strategic work begins.
I want to be careful here, because this could sound like an indictment of practitioners. It’s not. It’s a structural diagnosis. The individual designer, in most organisations, cannot unilaterally change the sprint structure, the stakeholder review cadence, or the three-layer process architecture. They can only adapt to the conditions they’re given. And they have — by becoming extraordinarily good at production work, because that’s what the system rewards.
But what if the system is about to change?
The 25-30 hours that just opened up
AI is collapsing the production layer. Not incrementally — structurally. The tasks that consume 60-70% of a designer’s time are precisely the tasks AI handles well. And the tooling already exists.
- Component generation and adaptation. Galileo AI converts natural language descriptions into polished UI components pre-mapped to design systems, reducing iteration time by 40%. Uizard converts sketches to interactive prototypes in seconds. That three-hour date picker adaptation? It’s becoming a prompt-and-review cycle.
- Wireframing and layout. Relume generates entire sitemaps and wireframe structures from prompts — an 85% reduction in wireframe creation time. Figma AI provides smart layout suggestions, auto-layout nesting, and content generation, reducing repetitive layout tasks by 50-70%.
- Specification and handoff. Builder.io’s Visual Copilot maps Figma designs directly to production components with 100% fidelity to existing design systems. v0 by Vercel generates production-grade React components through multi-agent reasoning. A “No Handoff Methodology” is emerging as a viable alternative to the specification bottleneck.
- Design system maintenance. AI-powered auditing can detect inconsistencies, propagate changes, and flag deviations. The manual reconciliation work that fills Fridays begins to disappear.
- Process translation. AI can translate between UX artefacts and development tickets, generate acceptance criteria from designs, and produce documentation in the formats each discipline requires.
If AI absorbs 60-70% of the production and maintenance work, and a further 10-15% of the process translation work, the UX designer suddenly has 25-30 hours per week of liberated cognitive capacity.
That’s not a marginal improvement. That’s a transformation of what the role is.
But here’s the question that matters: what fills the freed capacity?
Because liberation is not automatically productive. And the system that compressed your practice before will try to compress it again — just faster. There are two scenarios, and which one plays out depends on choices being made right now.
Scenario A: The elevation
The freed capacity goes to the work the methodology always called for. User research — real conversations with real users — moves from 0-5% to 15-25%. Journey mapping spans the full cognitive workflow, not just the feature. Designers spend time understanding problems before solving them. The methodology stops being aspirational and starts being practised.
In this scenario, the UX role transforms from production specialist to cognitive architect. The value proposition shifts from “we make it look right and work right” to “we ensure the human-AI relationship serves the human.” The seat at the table becomes a seat at the strategy table.
Scenario B: The skeleton crew
The freed capacity gets absorbed by the organisation as cost reduction. If a designer can produce in 15 hours what used to take 40, the response is not “give them 25 hours for strategy.” The response is “we need fewer designers.”
This is not a speculative concern. Practitioners report being expected to work 50% faster under the justification that “AI can help you do the work.” The 11% layoff rate in UX is real. Fears of “90% disappearance” of junior positions circulate in practitioner communities. Leadership that accepts “passable” AI-generated output as “good enough” has no structural reason to invest in design excellence.
Historical precedent from other disciplines offers partial reassurance but not certainty. Architecture’s “digital turn” in the 1990s marginalised practitioners who couldn’t adapt while creating new demand for system-level thinking. Journalism’s automation displaced routine reporting while increasing the value of investigative work. MIT research shows that automation historically doesn’t eliminate labour — it shifts what’s valued. But the transition is not automatic, and not everyone navigates it successfully.
Which scenario plays out is not predetermined. It depends on whether UX practitioners — and the leaders who employ them — can articulate the value of what the freed capacity should be used for. And that requires naming what we haven’t been practising.
The gap you haven’t been allowed to see
This is the uncomfortable part. And I want to frame it carefully, because it needs to be heard as diagnosis, not as accusation.
When AI removes the production work, it exposes a gap that many practitioners may find uncomfortable: the strategic thinking skills that UX claims as core competency have been under-practised. In some cases, for years.
A designer who has spent 80% of their time in Figma for five years has deep production skills and shallower strategic skills — not because they lack the training, but because they haven’t had the practice. Research methods atrophy without use. Synthesis skills weaken without exercise. The ability to hold a complete journey in mind and reason about cognitive load at the system level is a muscle that requires regular engagement.
I want to say this directly: this is not a criticism. It’s a structural diagnosis. And it’s important to name because the temptation will be to fill the liberated capacity with more production — higher-fidelity mockups, more variants, more documentation, more Figma polish. More of what’s familiar, not what’s needed.
Remember the MIT EEG research I cited in Part 1? The finding that users who delegate cognitive effort to AI exhibit weaker neural connectivity across reasoning and memory networks — and that this reduced engagement persists even after the AI is removed? The principle applies to practitioners too. Five years of production-mode work creates cognitive patterns. Strategic mode requires effort to re-engage.
But here’s the thing that the anxiety obscures: the recovery is faster than you expect.
You have the training. You have the framework knowledge. What you need is practice — actual user conversations, actual journey mapping, actual friction classification decisions. The first attempts will feel awkward. By the fifth attempt, the muscle memory starts returning. The basketball player who spent five years maintaining the court didn’t forget how to shoot — they just haven’t been shooting.
And the production skills you built? They’re not worthless. They’re transferable. Systems thinking through design systems transfers to journey architecture. Constraint-based reasoning transfers to friction redistribution. Cross-functional translation — that invisible “stakeholder OS” you navigate every day — transfers directly to the cross-boundary design work that AI-era practice demands. Specification discipline transfers to sovereignty checkpoint specification.
You didn’t waste those years. You built real competencies. The frame is shifting, not the foundation.
What the practice becomes
Based on the full diagnosis, here’s what changes in core UX competency:
- Screen design becomes journey architecture. The unit of design moves from individual screens to complete cognitive journeys across tools and time. You stop asking “does this screen work?” and start asking “does this journey build capability?”
- Friction elimination becomes friction redistribution. Some friction builds capability — the research on “desirable difficulties” shows that strategic challenges during learning enhance long-term outcomes. The designer’s judgment determines which friction serves the user and which wastes their time. This is the distinctly OL-governed skill, and it has no precedent in traditional UX training.
- Activity-scoped projects become cross-boundary design. AI touches everything simultaneously. Designing for one activity creates what I called “pager solutions” in Part 1 — optimised within one segment, creating load at every transition. The new unit of design spans the boundaries between tools, modes, and contexts.
- Surface verification becomes outcome verification. Checking whether it looks right becomes checking whether it makes the user better. Visual QA and brand compliance don’t disappear — they just stop being the primary quality gate.
- Production expertise becomes cognitive architecture. The craft shifts from making artefacts to designing cognitive relationships. Figma proficiency becomes less important than judgment about what the specification should achieve.
Let me make this concrete with an example — because “cognitive architecture” sounds abstract until you see what it looks like in practice.
The analysis dashboard
(a worked example)
Imagine your team receives a brief: “Improve the analysis dashboard.” The traditional approach: you map the journey within the dashboard.

Login → Select dataset → Configure filters → View results → Export report

You identify friction points: slow load times, confusing filter UI, limited export options. You design improvements. The dashboard gets better. Brief delivered.

Now apply the OL lens.
Step 1: Map the full workflow
The user doesn’t start at the dashboard. They start with a question — triggered by an email from a stakeholder, or a pattern noticed in a morning meeting. The full journey:
Trigger (email/meeting) → Formulate question (internal) → Open dashboard → Configure analysis → Review results → Interpret findings → Draft conclusions (document) → Present to stakeholders (meeting)
The dashboard is stages 3-5 of an 8-stage journey. You’ve been designing a third of the experience.
Step 2: Profile the cognitive load
Walk through each stage and estimate the six OL components. What emerges is this: the user arrives at the critical verification stage — reviewing results, stage 5 — with depleted cognitive reserve. They’ve spent their executive function on tool management in stages 3 and 4. Context maintenance is high because they’re holding their original question in working memory while wrestling with filter configuration. Coordination cost is spiking because the tool demands technical attention at exactly the moment the user needs analytical attention.
The dashboard design didn’t create this depletion. The journey created it. And no amount of filter UX improvement will solve a problem that lives two stages upstream.
Step 3: Classify the friction
- Filter configuration complexity? Overhead. Builds no analytical capability. Automate it.
- Waiting for data to load? Overhead. Pure waste.
- The need to formulate the question before analysing? Productive. This IS the analysis. If AI formulates the question for the user, the user’s analytical capability atrophies. Preserve this friction.
- Interpreting results requires domain knowledge? Productive. This effort builds expertise. Support it but don’t replace it.
- Translating results into stakeholder language? Ambiguous. For a novice, it builds communication skills. For an expert, it’s routine overhead. Design for adaptive support.
Step 4: Map the boundaries
The most expensive boundary isn’t in the dashboard. It’s between the dashboard and the document. The user’s entire analytical reasoning — which filters they applied, which comparisons they made, why they drew these conclusions — gets lost at the export. Only the final numbers transfer. This means anyone reviewing the analysis later has no access to the analytical path.
This is a cognitive boundary problem that no dashboard redesign can solve. It’s a boundary design problem. And it’s invisible to any analysis scoped to the dashboard alone.
Step 5: Sketch the temporal trajectory
- Day 1: the user carefully configures the analysis, reviews results critically, cross-checks against expectations.
- Day 90: the user has memorised their standard filter configuration. Faster, yes. But they’ve also stopped questioning whether their standard configuration is still the right one. They export results more quickly, with less documentation of reasoning. Their “standard analysis” has become a ritual — performed the same way each time without questioning whether the question has changed.
The traditional approach would have improved filter UX, reduced load times, and enhanced export options. Useful improvements. Pager solutions. The OL approach reveals that the highest-value intervention is at the boundary, not in the tool. That the critical evaluation moment is undermined by the stages before it. That the user’s original question is the most valuable cognitive artefact in the whole journey. And that the Day-90 user needs different support than the Day-1 user.
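The five-step walkthrough above can be sketched as a small data model, so the journey-level questions become queryable rather than implicit. Everything here is hypothetical — the stage names, fields, and the `Friction` categories are an illustration of the classification logic, not a formal OL specification:

```python
# Hypothetical sketch of OL journey mapping as data. Stage names, fields,
# and values are illustrative, not part of any formal OL specification.
from dataclasses import dataclass, field
from enum import Enum

class Friction(Enum):
    PRODUCTIVE = "preserve"   # builds user capability — keep it
    OVERHEAD = "automate"     # pure waste — remove it
    AMBIGUOUS = "adapt"       # depends on user expertise — adaptive support

@dataclass
class Stage:
    name: str
    tool: str                           # where the stage happens
    friction: Friction
    context_lost_on_exit: list = field(default_factory=list)

# A slice of the dashboard journey from the worked example.
journey = [
    Stage("Formulate question", "internal", Friction.PRODUCTIVE),
    Stage("Configure analysis", "dashboard", Friction.OVERHEAD),
    Stage("Review results", "dashboard", Friction.PRODUCTIVE,
          ["filters applied", "comparisons made", "reasoning"]),
    Stage("Draft conclusions", "document", Friction.AMBIGUOUS),
]

# Boundaries are tool transitions; the expensive ones lose context.
boundaries = [
    (a.name, b.name, a.context_lost_on_exit)
    for a, b in zip(journey, journey[1:])
    if a.tool != b.tool
]
worst = max(boundaries, key=lambda b: len(b[2]))
print("Most expensive boundary:", worst[0], "->", worst[1])
print("Context lost:", worst[2])

# Friction classification drives the design response.
to_automate = [s.name for s in journey if s.friction is Friction.OVERHEAD]
to_preserve = [s.name for s in journey if s.friction is Friction.PRODUCTIVE]
```

Even this toy model surfaces the example’s central finding automatically: the most expensive boundary sits at the dashboard-to-document export, where the analytical reasoning is lost — and the filter configuration, not the question formulation, is what should be automated.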
None of these insights appear in a traditional journey map. All of them appear when you add the cognitive layer. That’s what cognitive architecture means in practice. Not abstract theory — specific, actionable design decisions that traditional methods can’t see.
Where to start
(depending on where you are)
I’ve painted a picture of a transformed practice — journey architecture, friction redistribution, cross-boundary design, temporal reasoning, sovereignty judgment. And if you’re feeling a mixture of excitement and anxiety right now, that’s the appropriate response.
So let me bring this down to earth. You don’t need to become a “cognitive architect” by next Monday. You need to start one practice, this week, and build from there.
If you’re a mid-level designer in a sprint team
You’re closest to the real workflow. You see the friction every day. Your production instincts are sharp. Start here:
- One conversation per sprint. Talk to one user. Not a formal study — a 15-minute conversation. “I’m designing this feature. Can you show me how you currently handle this?” This costs almost nothing and produces more insight than any amount of assumption-based design. If your organisation makes this difficult, that difficulty itself is the diagnosis.
- Classify one friction point per week. Take something from your most recent design. Is this friction productive (builds capability) or overhead (wastes time)? Write one sentence explaining why. Share it with a colleague. The disagreements are where the learning lives.
- Mark one boundary. In your next user flow, add one transition — where the user enters your feature from somewhere else, or leaves for something else. Note what context they carry in and what they lose. Just one. That’s the on-ramp.
If you’re a senior designer or design lead
You have more agency. You can influence what your team works on and how. Start here:
- Run one OL journey mapping session. Pick a current project. Minimum viable version: map the full workflow (not just the feature), mark the boundaries, classify one friction point, ask the Day-90 question. Ninety minutes. You’ll surface insights that traditional methods miss.
- Apply the brief review. At the start of your next project, run a short brief review: Does the brief define a capability outcome, not just a feature? Does it scope the journey, not just the screen? Does it include temporal requirements? Fifteen minutes. The most common finding: the brief defines a feature but not a capability outcome. Naming this gap is the first step toward closing it.
- Make one sovereignty argument. In your next design review, present one decision framed in sovereignty terms: “We preserved this friction point because removing it would make the user dependent on the tool. Here’s the capability it builds.” See how the room responds. You’re introducing the vocabulary.
If you’re a design leader
You create the conditions. Start here:
- Protect 10%. Allocate 4 hours per week per designer for non-production work. User conversations, journey mapping, friction analysis. Frame it as investment. Organisations with design leadership outperform benchmarks by up to 32% in revenue growth. The time investment is justified.
- Rewrite one brief. Take an incoming project brief and add the OL dimensions: capability outcome, friction classification, boundary awareness, temporal requirements. Show your product partner what a brief looks like when it protects the conditions for good design.
- Name the process collision. In your next retrospective: “We’re running three process architectures simultaneously — design, development, organisational — and nobody owns the integration. The compound cognitive cost falls on our designers.” Naming it is the first step toward governing it.
Each starting point takes less than an hour. Each introduces one new concept. Each builds toward the next.
The seat was always yours
I started this exploration — both Part 1 and this companion piece — with a nagging feeling that something was fundamentally off about how we practise design in the age of AI. Not off in a small way. Off in a structural way that the existing frameworks couldn’t quite capture.
What the diagnosis reveals is both uncomfortable and liberating. Uncomfortable because it names what many of us have felt but couldn’t articulate — the system turned our methods into rituals. The methodology is real. The conditions for executing it were not. We’ve been so consumed by the overhead of production, process, and boundary translation that the work the discipline exists to do — understanding users, designing for human capability, ensuring technology serves people — got squeezed into the margins.
But liberating because the conditions are changing. AI compression is removing the production overhead. The 25-30 hours are opening up. And the skills the new practice demands — journey-level thinking, friction redistribution, temporal reasoning, sovereignty judgment — are extensions of capabilities we’ve been building all along. The systems thinker who maintained a design system can think at the journey level. The cross-functional translator who navigated three process layers can design across boundaries. The specification discipline that made handoff precise can specify sovereignty checkpoints.
The question isn’t whether UX can make this transition. The question is whether we’ll make it intentionally — with a clear understanding of what the practice becomes and what it leaves behind — or whether we’ll let it happen to us, filling freed capacity with more production because that’s what’s familiar.
To support the intentional path, the full diagnostic system — the Value Preservation Protocol (a checklist for protecting cognitive design values at five project milestones), the OL-Governed Journey Mapping Methodology (the practical method for the cognitive layer work), and the complete Practitioner Transition Guide (skill maps, six-month development paths, and an honest anxiety section) — will be available as companion resources. These are the practitioner tools. They’re not theory. They’re what you use on Monday morning.
But the tools are only as good as the intent behind them. And the intent comes back to something the UX discipline has always known, even when the system didn’t let us practise it: the measure of good design isn’t whether the user completed the task. It’s whether the user is better for having completed it.
We built our discipline on a promise — we put users first. We understand people. We design with empathy and evidence. That promise was genuine. The problem was never the identity. It was the system that prevented its practice.
What AI compression offers is not a new identity but the conditions for the original one. The methodology was real. The conditions for executing it are emerging. The production overhead is being absorbed. The capacity is being freed.
The system turned your methods into rituals. The system is now changing.
What your methods become next — that’s the question I can’t answer for you. But I think the seat was always yours. The room just got a lot bigger.
And maybe the question worth sitting with is this: when the overhead lifts and the capacity returns and the system finally lets you do the work you trained for — will you still remember why you became a designer in the first place?
I think you will. But I think it’s worth asking.
Disclaimer
AI-Assisted Content Disclosure: This article was developed using Claude (research synthesis, structural analysis, and writing collaboration), Gemini Deep Research (six targeted investigations spanning 190+ sources covering UX time allocation, methodology ritualization, Agile integration challenges, Figma’s role in enterprise UX, AI’s impact on design practice, and AI-user capability research), Google NotebookLM (podcast generation), MidJourney (visual concepts), and Descript (audio editing). The OL Practice System — including the diagnostic framework, value preservation protocol, journey mapping methodology, and transition guide — was developed through independent analysis with AI serving as a collaborative thinking partner throughout the process.
Opinion Note: The views, analysis, and diagnostic framework presented here represent the author’s independent exploration and professional experience as a UX practitioner. This should be read as a practitioner’s working diagnosis — informed by extensive research but not peer-reviewed academic output. The honest uncertainties and limitations are discussed openly within the source documents.
Sources and Methodology: The diagnostic claims draw on six targeted research investigations with 190+ sources consulted, including Figma State of Design reports, Maze UX Statistics surveys, State of User Research 2025, Tanya Snook’s UX Theatre framework, ISO 9241-210, academic systematic literature reviews on UX-Agile integration, Design Council surveys, and practitioner community discourse. Counter-evidence was actively sought in each investigation. The research methodology carries acknowledged biases: research prompts were designed around a pre-existing thesis, industry surveys come from UX tooling companies with structural incentive to document pain points, and social media skews toward complaint. The confidence interval is wider than assertive prose might suggest.
Research & sources:
- Figma — State of Design reports: https://www.figma.com/blog/
- Maze — UX Statistics & Trends: https://maze.co/resources/ux-statistics/
- State of User Research 2025: https://www.userinterviews.com/state-of-user-research
- Tanya Snook — UX Theatre: https://uxdesign.cc/ux-theatre-35295e3f28e0
- Shopify — Polaris Design System documentation: https://polaris.shopify.com/
- ISO 9241-210 — Human-centred design for interactive systems: https://www.iso.org/standard/77520.html
- Bjork & Bjork — Desirable Difficulties in Learning: https://bjorklab.psych.ucla.edu/research/desirable-difficulties/
- Slamecka & Graf (1978) — The Generation Effect: https://doi.org/10.1016/S0022-5371(78)90370-3
- Parasuraman & Manzey — Automation Complacency and Bias: https://doi.org/10.1518/001872010X12584049986692
- Kosmyna et al. (MIT, 2025) — EEG/fNIRS study on AI-assisted cognitive work: https://www.media.mit.edu/projects/ai-cognitive-engagement/overview/
- Galileo AI — Natural language to UI generation: https://www.usegalileo.ai/
- Relume — AI wireframe and sitemap generation: https://www.relume.io/
- Builder.io — Visual Copilot (design-to-code): https://www.builder.io/m/design-to-code
- v0 by Vercel — AI component generation: https://v0.dev/
Companion resources (The OL practice toolkit):
- Design Value Preservation Protocol v1.0: Downloadable PDF/doc
- OL-Governed Journey Mapping Methodology v1.0: Downloadable PDF/doc
- UX Practitioner Transition Guide v1.0: Downloadable PDF/doc
Related Stimulus content:
- “The conductor’s problem”: https://www.stimulus.se/2026/03/03/the-conductors-problem-why-everything-you-know-about-ux-is-about-to-become-the-easy-part/
- “The digital dance — Reclaiming our minds”: https://www.stimulus.se/the-digital-dance-reclaiming-our-minds/
- “Analysis paralysis in the AI age”: https://www.stimulus.se/analysis-paralysis-in-the-ai-age/
- “Can interdisciplinary thinking drive the next wave of innovation?”: https://www.stimulus.se/can-interdisciplinary-thinking-drive-the-next-wave-of-innovation/
- “What if we’re building AI consciousness backwards?”: https://www.stimulus.se/what-if-were-building-ai-consciousness-backwards/