
Building The Web With Claude
Most of what's being written about AI right now is about tools. Which ones to use, which ones are overhyped, which ones are coming for your job. That's the wrong frame. The teams pulling ahead aren't winning on tool selection, they're winning on judgment: knowing what to hand a model, what guardrails to set, and what decisions only hard-won human experience can make.
At Edgar Allan, we've been calling this wisdom work. And it's what Corey Moen, web lead at Anthropic and former Webflow brand team member, and I spent an hour unpacking in a recent live conversation.
Three things worth knowing before you read on:
- The point isn't AI adoption. Teams that shape their workflows around what they want to produce, then configure AI around those goals, are outpacing the ones that adopt tools first and figure out process later.
- Governance is the differentiator. Documented brand standards, clean content models, and a clear review layer aren't nice-to-haves when AI is doing execution at scale. They're what separates quality output from firing a slop cannon.
- Most websites aren't built for what's coming. A site designed to launch once and maintain occasionally is a way different thing than a site built to update continuously. The infrastructure gap is real, and it's only getting wider.
There’s a shift happening that no one’s naming.
A lot has been written in the last year about taste, craft, creativity, and all the human things AI can't replace. Those words all point at something true, but they're too vague to be useful.
That's because they describe the outcome, not the practice.
Wisdom work is more precise.
It's the practice of knowing which decisions matter, which don't, and which problems are worth solving in a particular way for a particular audience at a particular moment. And wisdom is accumulated through doing things, making calls, watching them land right…and wrong, then adjusting, and doing it again.
The biggest thing, though: You can't prompt your way into it.
The clearest articulation of this idea comes from an unlikely place: a 1979 Frank Zappa rock opera. In Joe's Garage, Zappa strings together a hierarchy that's been rattling around our brains at EA lately: "Information is not knowledge, knowledge is not wisdom, wisdom is not truth."
We've spent years as knowledge workers in an information age. Now that we have knowledge tools, the question is, “What comes next?”
I introduced the Zappa hierarchy during my conversation with Corey, and he extended it:
"Wisdom is knowledge from lived experience versus knowledge from books. You didn't cut your teeth to get it. And I think there's a lot of wisdom that is just so vital, not just for our survival as practitioners, but for how we make sense of the world."
That’s the framing to keep in mind as you read this or watch the video of our talk. Everything else flows from it.
The hype is real. So is the noise.
There's a version of AI marketing that's just old marketing with better production: grand claims, vague demos, outcomes that evaporate when you look closely. I brought this up early in my conversation with Corey, noting companies promising SOC 2 compliance in a day, AI website builders that turn out to be offshore teams clicking really fast. You peel it back, and you're like... is this real? Or is it cake?
Corey's take was direct. The tools aren't magic. They're multipliers. What you put in determines what you get back.
We've seen this in practice over and over. The teams getting real traction with AI aren't chasing the newest release. They're asking a different question: not "which AI should we use?" but "what does our workflow actually need to do, and where does AI fit into that?" The reframe matters more than any particular product.
And making that reframe well requires exactly the kind of judgment AI can't provide.
Workflow design is where wisdom is applied.
Here's what happens when you try to teach an AI to do your job: you have to break down everything you know into actual steps. That process is uncomfortable if you've been running on instinct for a decade. But it's also clarifying in a way that very little else is. You surface assumptions you didn't know you were making. You find the parts of your process that were always arbitrary and the parts that were truly load-bearing.
Corey has been operating at this level explicitly. For most of his career, the tool defined the process. You picked a CMS and learned its limits. You picked a design system and built inside its constraints. The platform set the ceiling.
That, he notes, is starting to flip. Tools are now extensible enough that you can define the process first, then shape the tool around it. Which means the most valuable work right now is figuring out how you want to operate. What do you never want to do again? What's tedious? What’s table-stakes? What daily work can you automate without sacrificing judgment?
His own test case makes the theory concrete. Facing a localization crunch on 420 customer stories, all long-form rich text, he built a workflow in Claude Code using a markdown style guide and a CSV glossary file as guardrails. Fourteen sub-agents ran in parallel. Ten million characters, four hours, delivered directly into Webflow via MCP. The wisdom was in the setup: knowing what guardrails the model needed, knowing what to hand it, and what to hold onto. The AI handled the execution, yes, but the judgment was entirely his.
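Corey's exact setup isn't public, but the shape of it is easy to sketch. Here's a minimal, hedged version in Python: the glossary CSV and style guide become a single system prompt, and a pool of parallel workers mirrors his fourteen sub-agents. The file layout, prompt wording, and model choice are our illustrative assumptions, not his; the SDK calls follow the official anthropic Python client.

```python
# Sketch of a guardrailed localization workflow like the one described above.
# File names, prompt wording, and the model id are illustrative assumptions.
import csv
from concurrent.futures import ThreadPoolExecutor

def load_glossary(path):
    """Read a term -> approved-translation mapping from a CSV guardrail file."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["term"]: row["translation"] for row in csv.DictReader(f)}

def build_system_prompt(style_guide_md, glossary, target_lang):
    """Fold the markdown style guide and glossary into one set of guardrails."""
    terms = "\n".join(f"- {src} -> {dst}" for src, dst in sorted(glossary.items()))
    return (
        f"Translate the user's rich text into {target_lang}.\n"
        f"Follow this style guide exactly:\n{style_guide_md}\n"
        f"Always use these approved translations:\n{terms}"
    )

def translate(story_html, system_prompt):
    """One worker: send a single story through the model. Requires `anthropic`."""
    import anthropic  # deferred so the guardrail helpers run without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model choice
        max_tokens=8192,
        system=system_prompt,
        messages=[{"role": "user", "content": story_html}],
    )
    return msg.content[0].text

def localize_all(stories, system_prompt, workers=14):
    """Fan out across parallel workers, echoing the fourteen sub-agents."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: translate(s, system_prompt), stories))
```

The point of the sketch is where the judgment lives: everything interesting happens in what gets loaded into the system prompt before a single token is generated.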
Wisdom work shows up in workflow design before it shows up anywhere else. The question isn't what AI can do. It's what decisions you need to make before AI can do anything useful. That's where experience earns its keep.
Governance isn’t glamorous, but it’s how wisdom scales.
A lot of people saw the Cursor story, in which the company ditched its CMS for Git and Markdown files, and read it as a canary in the coal mine: a sign that traditional web infrastructure is obsolete.
We read it differently.
That workflow makes complete sense for an engineering-first company where routing every content change through PR review is the norm. For most marketing teams, it would be a slow-motion disaster: every word change becomes a merge, then a build, then a deploy. The process is shaped around engineers, and marketing teams would feel every painful step.
The real question isn't whether to get rid of your CMS. It's how your governance layer holds together when AI handles the execution. Who reviews what the model outputs? What's the checkpoint before something publishes? How does brand voice stay consistent across a thousand pages when agents are handling the bulk of the writing?
Rachel Wolan, CPO at Webflow, recently wrote about what she called a "marketing harness." Corey's thinking maps directly onto it: a structured loop layer that routes AI outputs through predefined constraints at every step. Brand voice for copy. The component library for layouts. Analytics and cross-linking data for new pages. The model isn't operating freely. It's operating within constraints you've defined.
A marketing harness is a structured loop layer that routes AI outputs through predefined constraints: brand voice for copy, design systems for layout, performance data for content decisions. Without it, AI scales your volume. With it, AI scales your judgment.
The primitives underneath that harness are what really matter. Your documented brand standards, your design system, your content guidelines written in clean markdown rather than buried in a PDF. Those are what give the model something real to work from. A harness without good primitives is just a fence.
The advice: Build the primitives first, then let the model cook.
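Neither Rachel Wolan's post nor our conversation prescribes an implementation, but the checkpoint at the heart of a harness is easy to sketch. Here's a minimal, hedged version in Python: model output only reaches publish after passing checks derived from documented primitives. The specific rules (a banned-terms list, an approved component list, a headline length cap) are ours, purely for illustration.

```python
# Minimal sketch of a marketing-harness checkpoint: model output may publish
# only after clearing checks derived from documented primitives.
# The rule names and thresholds below are illustrative, not anyone's real system.
from dataclasses import dataclass, field

@dataclass
class Primitives:
    banned_terms: set = field(default_factory=set)         # brand-voice guardrail
    approved_components: set = field(default_factory=set)  # design-system guardrail
    max_headline_chars: int = 70                           # layout guardrail

def review(copy: str, components: list, primitives: Primitives) -> list:
    """Return a list of violations; an empty list means the output may publish."""
    problems = []
    lowered = copy.lower()
    for term in primitives.banned_terms:
        if term.lower() in lowered:
            problems.append(f"banned term used: {term!r}")
    for c in components:
        if c not in primitives.approved_components:
            problems.append(f"component not in design system: {c!r}")
    headline = copy.splitlines()[0] if copy else ""
    if len(headline) > primitives.max_headline_chars:
        problems.append("headline exceeds length limit")
    return problems
```

In a real harness, a failing output would be routed back to the model with the violation list as feedback, which is what makes it a loop rather than a filter.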
The website is becoming a living thing. Most brands aren’t ready.
We've written before about websites moving from static brochures toward something closer to a conversation. This is where that idea gets concrete.
AI-powered content workflows mean you can update a thousand pages the way you used to update just one. You can pull real customer questions from sales calls and surface that language across your entire site. You can run a brand refresh the way a retail store changes its window display: seasonally, deliberately, quickly. Corey noted that Webflow's own homepage went years between major updates. The team joked about it. They were a website builder that rarely rebuilt their own site. Time and tooling were always the constraints.
Both are changing fast.
Most sites still aren't built for this kind of cadence. They were designed to be launched, not maintained. But the infrastructure for a living site is different: cleaner CMS architecture, better content models, brand standards documented in a format a model can consistently apply. This connects directly to what we've been tracking in AEO readiness, because a site that can't update at pace is a site that falls behind in AI-generated answers, too.
A site built to launch is optimized for one moment. A site built to live is optimized for every moment that comes after. Those require different architectures, content models, and documentation. Most brands have the first, and very few have the second.
The teams that build this infrastructure now will be in a meaningfully different position in two years.
Design and development in the age of wisdom work.
Corey's take is that the typical Figma handoff is probably one of the first things to change, not because design doesn't matter, but because the layer between concept and implementation keeps shrinking. In this new world, the smart move is to define the direction, establish the system, and then let the model handle the volume.
He described wanting to hand a brief to Claude, get three fully realized page versions back using the brand's voice and component system, then mix and match what works. Anyone who's been in agency life recognizes what happens next: the client Frankensteins the three concepts into a fourth thing. That part doesn't change. The first draft just arrives faster, which means more time for the judgment calls that come after it.
The designers who will struggle are the ones who define themselves by the tool rather than the thinking. The ones who thrive will be able to articulate why something works, not just make it work. Same for writers. Same for developers. Knowing what to build and why it matters is increasingly the whole job. The execution layer is becoming a conversation you have with a model, guided by the judgment you've spent years building.
What wisdom work looks like in practice.
Mason put it plainly: we've spent decades accumulating knowledge about how things work on the web. How campaigns land. Which copy registers with which audiences. Why certain structural decisions cause problems six months later. How to read a client's hesitation in a review meeting and know which direction to push the conversation.
None of that is in any LLM's training data in the specific, particular form you've accumulated it inside your own fleshy head. And all of it becomes more valuable, not less, as AI handles more of the execution.
Corey said it directly: "The more I use it, the more I gain wisdom because of lived experience actually using it, not just believing what people say on the internet. The more I realize: as long as I feed it the right thing, give it the right playground, give it the right environment, this can save me so much time and arguably have better output than I could ever do. But it's not free. It's not just click a button and it's done. That human part, the wisdom work, will always be there."
This connects to something we've been tracking closely in AEO strategy: the brands that show up reliably in AI-generated answers aren't the ones publishing the most. They're the ones whose documented expertise is specific enough, grounded enough, and structured well enough that a model can extract and cite it.
And that’s wisdom, documented.
Wisdom work is the practice of knowing which decisions matter, for which audience, at which moment. It's distinct from knowledge work in that it depends on lived experience you can't shortcut. As AI handles more of the execution layer, wisdom work becomes the primary contribution of anyone who's been doing this long enough to have opinions worth trusting.
What is wisdom work, and how is it different from knowledge work?
Knowledge work is the application of known information: following a process, executing a technique, applying expertise you've learned. Wisdom work is knowing which process matters in a given context, which decisions are worth making carefully, and which can be delegated. It depends on lived experience rather than acquired information, and it's what practitioners bring to AI workflows that the model itself can't provide.
Why does workflow design matter more than tool selection?
For most of the web's history, teams built their workflows around what the tools could do. The CMS set the publishing rules. The design system defined the constraints. AI makes tools extensible enough that you can flip this: define how you want to work first, then configure the tool around that. Teams that do this are finding real efficiency gains. Teams that adopt tools first and figure out the workflow later usually end up with the same problems they started with, just at higher volume.
What is a marketing harness?
A marketing harness is a structured loop layer that routes AI outputs through predefined constraints before they reach production. That might mean brand voice guidelines for copy, a component library for layout decisions, or analytics data for internal linking. Without a harness, AI increases volume. With one, it increases the quality of what you produce at volume. The guardrails are only as good as the documented primitives underneath them.
What does "the website as a living thing" actually require?
CMS architecture designed for ongoing updates, not just a launch. Content models that are relational and filterable. Brand standards documented in a format a model can read and apply. Most sites were built to be launched once and maintained occasionally. A living site has different infrastructure from day one, and most brands haven't made that investment yet.
How does wisdom work connect to AEO?
The brands that show up in AI-generated answers are the ones whose documented expertise is specific, structured, and grounded enough to be extracted and cited. That's wisdom, documented. Publishing volume matters less than the depth and clarity of what you publish. Teams that treat their content as a record of accumulated judgment, not just a traffic strategy, are the ones building AEO equity that compounds.
What's the most practical place to start?
Pick one part of your workflow that's tedious and low-judgment. Give it to a model. Watch what comes back. The output will tell you what guardrails you're missing. Build those. Run it again. That iteration is how you learn where your wisdom is, which is usually a different place than you expect.