Every article on this site was drafted by an AI. The design, the book summaries, the exercise guides, the chatbot you may have already used -- all of it was created in collaboration with Claude, Anthropic's large language model. A human directed every decision, reviewed every page, and edited what needed editing. But the first draft, in every case, came from a machine.
You deserve to know that. And it's worth explaining why.
What "built with AI" actually means here
It doesn't mean I typed "make me a website about forums" and published whatever came back. The process looked more like a months-long conversation between someone with fifteen years of experience in peer advisory groups and a tool that's very good at organizing knowledge, finding research, and generating clean prose on demand.
I would describe what an article needed to accomplish -- the argument, the audience, the emotional register. Claude would produce a draft. I would rewrite the parts that didn't sound right, push back on claims that oversimplified the methodology, and cut anything that read like a brochure. Some articles required a dozen rounds. Some came out nearly finished on the first pass. The quality of the output depended entirely on the quality of the direction.
Forum Sage -- the AI chatbot embedded throughout this site -- is a different case. I designed its personality, its boundaries, and its knowledge base, drawing on the same forum methodology that Bob Halperin developed over two decades running HBS Alumni Forums. Claude built the underlying architecture. The result is a tool that can help members prepare for meetings, explore frameworks, and think through difficult topics at 11 PM on a Tuesday, when no human facilitator is available.
None of this replaces what happens in the room. That point is important enough that Forum Sage itself says so.
Why tell you
Three reasons.
First, because the forum methodology is built on honesty. The whole practice -- confidentiality, experience sharing, the no-advice rule -- only works when people trust each other enough to say what's actually happening. A site about radical candor probably shouldn't obscure how it was made.
Second, because you would likely figure it out anyway. The people who use this site are senior executives, many of them actively deploying AI in their own businesses. HBS now requires an AI course for every MBA student. Seven in ten Vistage member CEOs already use generative AI at work. This is an audience that recognizes AI-assisted writing -- and an audience that would notice if the disclosure were missing.
Third, because the research on this is clear: proactive transparency causes less trust damage than being found out later. A 2025 study in Organizational Behavior and Human Decision Processes found that disclosing AI use does reduce perceived legitimacy -- but getting exposed by a third party is significantly worse. The asymmetry favors honesty. It usually does.
What AI can't do
Claude is excellent at synthesizing research, finding patterns across large bodies of work, and producing readable prose about complex topics. It can explain the Drama Triangle or summarize polyvagal theory faster and more accurately than most humans. It doesn't get tired, it doesn't get defensive, and it can generate thirty different update formats for forum meetings without running out of ideas.
It also has no experience. It has never sat in a forum meeting. It has never watched a group go silent after someone shared something they'd never said out loud. It doesn't know what it feels like when a member's voice cracks, or when a moderator holds a pause that lasts ten seconds too long and something shifts in the room. It can describe these moments because it has read about them. It hasn't lived them.
This distinction matters for a site about peer advisory groups, because the entire value proposition of forum is the thing AI can't replicate: being truly seen by people who know you. Every article here was informed by real experience with real groups. The AI helped me write it down. It didn't supply the judgment about what was worth writing.
A note on the chatbot
Forum Sage identifies itself as an AI assistant. It's grounded in forum methodology -- the meeting structures, the exercises, the books, the facilitation principles -- and it can help you prepare for a presentation, design a group exercise, or understand a concept you encountered in forum. It runs on OpenAI's GPT-4 with a curated knowledge base. Your messages on this site are not saved and are not used to train any model.
It will occasionally get things wrong. It may miss nuance that an experienced facilitator would catch. It's knowledgeable, not wise. Use it the way you'd use a well-read colleague who hasn't been in the room -- helpful for preparation, not a substitute for the real conversation.
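For readers who want to see the shape of the thing, here is a minimal sketch of the pattern described above: a model answering from a small curated knowledge base, with nothing persisted between requests. The snippets, system prompt, and naive keyword retrieval are illustrative assumptions, not Forum Sage's actual implementation.

```python
# Minimal retrieval-augmented chat sketch. Illustrative only -- the
# snippets, prompt, and retrieval logic are stand-ins, not Forum Sage's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A tiny stand-in for the curated knowledge base.
KNOWLEDGE_BASE = [
    "A forum update follows a set structure: headline, context, ask.",
    "The no-advice rule: members share experience, not prescriptions.",
    "Confidentiality is absolute: nothing leaves the room.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive keyword overlap; a production system would use embeddings."""
    words = question.lower().split()
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: -sum(w in doc.lower() for w in words))
    return scored[:k]

def answer(question: str) -> str:
    # Ground the model in the retrieved snippets rather than its own priors.
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a forum-methodology assistant. "
                        "Answer only from this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    # Nothing is stored: the exchange exists only for this request.
    return response.choices[0].message.content

print(answer("How should I structure my update?"))
```

The design choice worth noting: the grounding lives in the retrieved context, not in the model's weights, which is why your messages never need to be saved or used for training.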
What this means for your work
If you're a senior leader reading this, you're probably wrestling with versions of the same questions in your own organization: how much to disclose about AI use, where AI adds value, and where it falls short. The peer advisory context makes these questions sharper because trust is the product.
Here's what building this site taught me. AI is extraordinary at the mechanical parts of knowledge work -- research, drafting, organizing, iterating. It collapses what used to take weeks into hours. But it can't replace the thing that makes the work matter, which is a person who has done the work deciding what's true, what's important, and what the audience actually needs to hear. The human isn't a rubber stamp. The human is the whole point.
Every article on this site reflects that division of labor. The AI is the tool. The judgment is mine. And now you know.
Sources

Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes.
Harvard Business School (2025). AI and data science course requirement for MBA candidates.
Vistage Research Center (2025). AI adoption survey of CEO members.