I've spent fifteen years building knowledge infrastructure and content programs across Microsoft, VMware, Blizzard, and the European Parliament. I don't separate "documentation strategy" from "how I operate as a content leader." They're the same thing. This piece is both: ten moves that matter, plus a window into how I structure programs—from conception through iteration. If you're building a documentation program or trying to save one that's fading into invisibility, this is my playbook.
Move 1: Start with Visibility
Visibility is foundational to everything else. If no one can see what you do, the rest doesn't matter.
At Microsoft, I started by building dashboards. Not dashboards for me—dashboards for leadership. Content coverage (which product areas had docs, which had gaps), freshness metrics (when was this last reviewed?), consumption data (who reads what, how often, where do people abandon?). We pulled the data from Azure DevOps and our CMS APIs.
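To make the freshness side of such a dashboard concrete, here's a minimal sketch. The record shape, file paths, and one-year review SLA are all illustrative, not the actual Microsoft schema:

```python
from datetime import date, timedelta

# Hypothetical article records; real data would come from a CMS or repo API.
articles = [
    {"path": "storage/overview.md", "area": "storage", "last_reviewed": date(2024, 1, 10)},
    {"path": "storage/quotas.md", "area": "storage", "last_reviewed": date(2022, 6, 1)},
    {"path": "network/vpn.md", "area": "network", "last_reviewed": date(2023, 11, 5)},
]

STALE_AFTER = timedelta(days=365)  # example review SLA: one year

def freshness_report(articles, today):
    """Per product area: total docs and how many are past the review SLA."""
    report = {}
    for a in articles:
        area = report.setdefault(a["area"], {"total": 0, "stale": 0})
        area["total"] += 1
        if today - a["last_reviewed"] > STALE_AFTER:
            area["stale"] += 1
    return report

print(freshness_report(articles, date(2024, 3, 1)))
# → {'storage': {'total': 2, 'stale': 1}, 'network': {'total': 1, 'stale': 0}}
```

The point of a report like this isn't the code; it's that "one of our two storage docs hasn't been reviewed in over a year" is a sentence leadership can act on.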
The moment leadership could see the work, the conversation shifted. Suddenly we weren't fighting for credibility. We were fighting for resources to do more of what was already working.
This is always move one in any program I build. You cannot advocate for something people cannot see.
Move 2: Measure Value, Not Just Output
Output is easy. It's seductive. "We published 47 articles this quarter." But nobody cares.
At Blizzard, I connected documentation metrics to support ticket data. It turned out that 25% of calls could have been prevented with better docs, at roughly $25 per call. Suddenly the math was impossible to ignore. That's not a nice-to-have. That's a team budgeted as a cost center that's actually a cost-avoidance function.
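The math is simple enough to sketch. The monthly call volume below is hypothetical; only the 25% preventable rate and the roughly $25 per call come from the example above:

```python
# Back-of-envelope deflection math. monthly_calls is an invented volume;
# the 25% preventable rate and ~$25 per call are the figures from the text.
monthly_calls = 40_000
preventable_rate = 0.25
cost_per_call = 25  # USD, rough fully loaded cost

annual_savings = monthly_calls * 12 * preventable_rate * cost_per_call
print(f"${annual_savings:,.0f} per year")  # → $3,000,000 per year
```

Even if your real volume is a tenth of that, the number is big enough to change a budget conversation.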
Industry benchmarks exist. Start there. Share them. Often someone will surface the actual numbers just to correct you, and now you have real data for the next budget conversation.
In every program I structure, move two is always: find the money trail. Where is the pain? What's it costing? Prove the docs address it.
Move 3: Produce Content That Works for AI as Well as Humans
This is non-negotiable now. LLMs are a new type of reader, and they expose everything you've been getting away with.
Models will find inconsistencies humans gloss over. They'll surface outdated information that has been living in your system for years. They'll struggle with content that's coherent to humans but structurally chaotic to algorithms. At VMware, as AI tools hit the market, I realized that the content audit—the boring work of cleaning house—had become urgent.
Your documentation needs to be good enough for AI systems to use it. That's a higher bar than "humans can figure it out."
When I structure a content program now, this is built in from the beginning. Not as an add-on. As a foundational requirement.
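In practice, "built in from the beginning" starts with consistent, machine-readable metadata. Here's a toy lint check, assuming a YAML-style frontmatter convention and an example field schema (not any real team's standard):

```python
import re

# Example schema; which fields are required is a per-program decision.
REQUIRED_FIELDS = {"title", "product", "last_reviewed", "owner"}

def frontmatter_fields(markdown_text):
    """Extract field names from a YAML-style frontmatter block, if present."""
    match = re.match(r"^---\n(.*?)\n---\n", markdown_text, re.DOTALL)
    if not match:
        return set()
    return {line.split(":", 1)[0].strip()
            for line in match.group(1).splitlines() if ":" in line}

doc = """---
title: Configuring storage policies
product: storage
last_reviewed: 2024-02-01
owner: storage-docs-team
---
# Configuring storage policies
...
"""

missing = REQUIRED_FIELDS - frontmatter_fields(doc)
print(missing or "metadata complete")  # → metadata complete
```

A check this small, run on every commit, is the difference between metadata as a policy and metadata as a fact.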
Move 4: Educate Your Team on AI
Your team needs to know how LLMs interact with structured content, what metadata matters, how to write for both human comprehension and model parsing.
Teams that do this early get a massive advantage. Teams that don't are left wondering why their carefully crafted content doesn't work as a knowledge base for AI.
In every program I build, I make sure people understand the intersection of content structure, metadata, and how systems (human and AI) will actually use what they're building.
Move 5: Own Accountability for Content, Especially Automated Content
Assign clear ownership. This matters especially for anything generated through automated processes.
Who's responsible for accuracy? Who reviews? Who updates when it drifts? I learned this the hard way: when you automate content creation, you also automate the amplification of errors at scale. Automation without accountability is just fast failure.
Someone needs to be on the hook—not for writing everything, but for ensuring quality doesn't become a second-order concern.
This is why I structure programs with explicit ownership models. Not roles that create content in a vacuum. Roles with accountability for what ships.
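At its most minimal, an explicit ownership model is just a mapping that something enforces. The areas and owners below are invented for illustration:

```python
# Toy ownership check: every published content area must map to an
# accountable owner. Areas and owners are invented for illustration.
OWNERS = {
    "getting-started": "alice@example.com",
    "api-reference": "docs-platform-team",
}

published_areas = ["getting-started", "api-reference", "billing"]

unowned = [a for a in published_areas if a not in OWNERS]
print(unowned)  # → ['billing']
```

In practice a check like this lives in CI, where an unowned area fails the build instead of printing a list nobody reads.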
Move 6: Orchestrate More, But Always with a Human in Control
Automate the grunt work. Never automate the judgment call.
There are real gains in automating information gathering, tagging, content generation, and workflow orchestration. The efficiency is there. The mistake is thinking the human becomes optional.
Keep humans in quality control, accuracy review, and decision points that matter. Automate everything else. This is how you scale without breaking.
Every program I've built has this built in: humans decide what matters. Systems do the work.
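One way to sketch that split is a routing rule at the boundary between the system and the human. The fields and the 0.9 threshold here are invented for illustration:

```python
def route(item):
    """Send routine automated output straight through; hold judgment calls for a human."""
    # "auto_confidence", "customer_facing", and the 0.9 cutoff are all
    # hypothetical; the real decision points depend on your program.
    if item["auto_confidence"] >= 0.9 and not item["customer_facing"]:
        return "auto-publish"
    return "human-review"

print(route({"auto_confidence": 0.95, "customer_facing": False}))  # → auto-publish
print(route({"auto_confidence": 0.95, "customer_facing": True}))   # → human-review
```

The exact rule matters less than the fact that it exists, is written down, and defaults to a human when in doubt.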
Move 7: Communicate Your Why Repeatedly, Through Multiple Channels
Data and dashboards don't sell themselves. You need to communicate the why behind what you're doing.
Create a dedicated resource—a website, a single source of truth—that explains your documentation strategy in plain language. What's the end goal? What are the challenges you're solving? How do people get on board? What are your timelines and priorities?
Then layer on regular communication: Q&A sessions, brown bag sessions, office hours. Give people repeated opportunities to understand not just what you're doing, but why it matters to them.
This is where alignment happens. At Microsoft, this was a dedicated intranet space. At Blizzard, it was monthly all-hands updates. The format varies. The discipline is constant.
Move 8: Show Impact, Not Just Output
Output is easy to measure. Impact is what people actually care about.
Show the stories. Show the support ticket that didn't happen because docs were there. Show the onboarding time that dropped. Show the team that shipped faster because they didn't waste time hunting for information.
Connect your documentation work to business outcomes and user outcomes in ways people can see themselves in.
Output is "we wrote 47 articles this quarter." Impact is "we reduced time-to-productivity for new hires by 30%, which means we're saving 60 hours per cohort."
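Reverse-engineering an impact statement like that keeps you honest about what it assumes. The cohort size and baseline hours below are hypothetical; only the 30% reduction is the measured figure:

```python
# Illustrative numbers behind the impact claim above.
cohort_size = 10               # hypothetical new hires per cohort
baseline_hours_per_hire = 20   # hypothetical hours each hire spends hunting for info
reduction = 0.30               # the measured 30% drop in time-to-productivity

hours_saved_per_cohort = cohort_size * baseline_hours_per_hire * reduction
print(hours_saved_per_cohort)  # → 60.0
```

Showing your assumptions this plainly is part of the credibility: anyone can challenge the baseline, and that challenge gets you better data.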
In every program I structure, I build impact tracking in from the beginning. Not as an afterthought.
Move 9: Manage Change Structurally
When documentation practices shift—new tooling, new processes, new standards—treat it like actual change management.
Use established change management models (Kotter's 8-step model or similar) to guide transitions. Communicate the vision. Build coalitions. Address resistance. Celebrate wins.
Don't assume people will just adopt new ways because they're better. People resist change even when it's good. Structure your change management accordingly.
This has been critical every time I've restructured a program. You can have the perfect strategy, but if people don't adopt it, it's just a document.
Move 10: Build a Feedback Loop: Listen, Learn, Iterate
You've made the work visible. You've proven value. You've communicated why. Now listen.
Your documentation program isn't a set-it-and-forget-it initiative. It's a cycle. You need systematic feedback from users, from your team, and from stakeholders about what's working and what isn't.
What content is actually helping people? Where are they getting stuck? What gaps are emerging? Are your changes landing the way you intended?
Set up mechanisms to gather this feedback: analytics on what content gets used and abandoned, surveys, user interviews, regular retrospectives with your team. Then act on it. Iterate.
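As a sketch, an abandonment metric needs nothing more than task-completion events per page. The event shape here is invented for illustration:

```python
from collections import defaultdict

# Hypothetical analytics events: did the reader finish the task the page supports?
events = [
    {"page": "install.md", "completed_task": True},
    {"page": "install.md", "completed_task": False},
    {"page": "install.md", "completed_task": False},
    {"page": "upgrade.md", "completed_task": True},
]

stats = defaultdict(lambda: {"views": 0, "abandoned": 0})
for e in events:
    s = stats[e["page"]]
    s["views"] += 1
    if not e["completed_task"]:
        s["abandoned"] += 1

for page, s in stats.items():
    print(page, f"{s['abandoned'] / s['views']:.0%} abandoned")
# → install.md 67% abandoned
# → upgrade.md 0% abandoned
```

A page with a two-thirds abandonment rate is a far more useful retrospective input than "we should improve the install docs."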
The teams that get better are the ones that close the loop. They implement Moves 1-9, then listen. Then they adjust. Then they listen again.
How This Manifests in Real Programs
At Microsoft, this looked like:

- Visible dashboards (Move 1) feeding quarterly business cases (Move 2)
- A documented AI-readiness initiative (Moves 3-4) with explicit ownership (Move 5)
- A CMS governance model that automated routine work but kept humans in editorial decisions (Move 6)
- An internal wiki explaining why we were standardizing on certain tools and processes (Move 7)
- Metrics tied to support cost reduction and time-to-productivity (Move 8)
- Planned migrations with coalition-building before tooling changes (Move 9)
- Monthly retrospectives and user feedback sessions (Move 10)
At VMware, the program looked different in structure but followed the same logic:

- Visibility into content gaps and freshness (Moves 1-2)
- A content audit that prepared us for AI tooling (Move 3)
- Training on how to write for both humans and systems (Move 4)
- Clear ownership of content areas with accountability (Move 5)
- Automation of tagging and metadata assignment (Move 6)
- Regular communication about why we were cleaning house (Move 7)
- Impact tracking tied to SME onboarding time and knowledge reuse (Move 8)
- Change management around new processes (Move 9)
- Ongoing feedback loops that shaped what we prioritized next (Move 10)
This isn't theory. This is how you keep documentation from becoming invisible.
Why This Matters Right Now
Documentation is under pressure from every direction: budget scrutiny, AI disruption, organizational sprawl, and competing priorities. The teams that survive these pressures aren't the ones writing better individual articles. They're the ones that become strategically indispensable.
These ten moves are how you do that.
Start with visibility, measurement, and communication. Make the work visible, prove the value, and help people understand why. Then build the feedback loop that keeps you improving.
That's the program. That's how I operate. That's how teams that matter actually work.