How to identify AI champions from interview data, not surveys

They are rarely the loudest people in the room. The best way to identify AI champions is a structured interview, not a survey or a manager nomination. What you are looking for is evidence of workflow change: someone who can name the task, explain the method, show the output, and help a peer repeat it. Shallow adoption is often an identification problem, not a supply problem. The people already moving work with AI are often buried inside marketing ops, recruiting, finance, support, or one engineering squad.
Identifying AI champions means finding people who have turned AI access into repeatable day-to-day practice. That is capability detection, not sentiment tracking. A good champion does not just like ChatGPT or Claude, and does not need to be the most technical person in the room. As OpenAI Academy notes, AI transformation happens when behaviour changes, and champions make that change visible in real [workflows](/ai-workflows-for-finance-teams-month-end-reporting/).
This is why surveys underperform. McKinsey’s 2025 workplace research found that the employees who self-report the most AI experience and enthusiasm look like obvious champions on paper. In practice, self-report and manager visibility miss the operator who quietly cut first-draft time for German sales proposals from 90 minutes to 25, or the US HR lead who built a repeatable interview-summary workflow with approval checks. Structured interviews surface those people and separate genuine champions from surface-level users.
TL;DR
- Define a champion as someone who can name the task, show the workflow change, and produce evidence others can repeat, then use that standard instead of enthusiasm, seniority, or tool frequency.
- Replace survey-based selection with structured interviews that ask for the exact task, prompt sequence, output, and downstream checks, and use OpenAI Academy as the behavioural benchmark for what counts as real change.
- Cross-check interview claims against artifacts, approvals, and before/after examples so you can separate repeatable practice from “I use AI a lot” self-reporting and manager-visible noise.
- Scan quiet operators in ops, HR, finance, legal, support, and single squads first, then prioritize the people who have already standardised one recurring, judgment-heavy workflow over the loudest generalists.
- Use the hidden-champion pattern to seed workshops, peer demos, and a champions program, then re-interview quarterly to see which workflows spread and which users stay at surface level.
Why do surveys and nominations miss the real AI champions?
Because surveys, nominations, and usage logs mostly measure attention, confidence, or visibility, not changed work.
McKinsey’s 2025 workplace report notes that some employees self-report far more AI experience and enthusiasm than their peers, but that still tells you nothing about whether they rebuilt a workflow others can copy (McKinsey, 2025). “I use AI a lot” is a weak signal. A stronger one is whether someone can describe the exact task they changed, the prompt structure they use, and the output they trust enough to ship.
Manager nominations have a different bias: they over-select articulate, visible people. Useful if you need evangelists, not enough if you need practitioners with judgment. The best internal examples often come from quieter operators in ops, HR, finance, legal, or customer success who took one repetitive but judgment-heavy task and made it repeatable.
Tool logs are narrower still. They show frequency, not quality. Someone can generate hundreds of prompts and still be stuck at rewrite-and-summarise. Another person might use AI less often but have embedded it into one critical recurring process. Those two users look similar in licence dashboards and completely different in real adoption.
That hidden-champion pattern matters. In a 180-person Hamburg SaaS team we worked with, interviews surfaced three people already operating above their cohort baseline while survey “power users” were mostly experimenting in isolation.
What should you decide before you start identifying AI champions?
Start by fixing the decision criteria, or you will select the people who talk best about AI rather than the ones who changed work.
- Define the job of the champion. A peer trainer, a workflow builder, and a candidate for a formal champions program are not the same profile. Peer trainers need credibility and the ability to explain judgment calls. Workflow builders need repeatability: they can show the task, prompt structure, checks, and output they trust enough to ship. Formal-program candidates also need cross-team influence. OpenAI Academy is explicit: AI transformation happens when behaviour changes, not when a tool is merely available.
- Choose the unit of analysis before you interview anyone. If you score only individuals, you miss team conditions. If you score only teams, you miss the quiet operator who rebuilt one critical workflow. Finance, engineering, HR, and customer success produce different champion signatures. GitHub’s playbook makes the same point: adoption is a change-management problem, not a tech problem (GitHub Resources).
- Set the evidence standard before the first interview. Weight observed behaviour, verified examples, and artifacts above opinions. Ask for the revised brief, prompt template, meeting summary, QA checklist, or reusable workflow.
- Narrow the workflow scope. Pick work that matters: support replies, campaign briefs, hiring screens, meeting prep, ticket triage.
How do you identify AI champions with structured interviews?
Use a short, evidence-first interview loop.
- Start with a mixed sample, not just the usual enthusiasts. Ask for volunteers, then deliberately add people close to operational work: an ops coordinator, a marketing manager, an HR generalist, someone in finance or legal. Hidden champions often sit where repetitive knowledge work meets messy context.
- Ask for one workflow change only. Do not ask “how do you use AI?” Ask: what exact task changed, what was the old process, what is the new process, and what output got better. The strongest candidates answer with something narrow and teachable.
- Probe the method, not the vibe. Ask for prompt structure, context inputs, review steps, and guardrails. Also ask what they no longer trust the model to do. That boundary is often the clearest sign of judgment.
- Verify with artifacts. Request a prompt, template, saved output, process note, or before/after example. If they cannot show anything, classify them cautiously.
- Score the interview. Separate surface users from growing users, then mark champions as the people who combine repeatability, output judgment, and peer pull. Finish with one simple question: “who already asks you for help with this?” Informal help patterns often identify the real champions before the org chart does.
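
If several people run these interviews, it helps to write the scoring rubric down so classifications stay comparable. The sketch below is one possible way to encode it, not a validated instrument: the dimension names, 0-3 scales, and thresholds are assumptions you should tune against your own evidence standard.

```python
from dataclasses import dataclass

# Hypothetical rubric: each dimension is scored 0-3 by the interviewer.
# Dimension names and thresholds are illustrative assumptions.

@dataclass
class InterviewScore:
    repeatability: int       # can they rerun the workflow and teach it? (0-3)
    output_judgment: int     # do they know what they check and what they no longer trust? (0-3)
    peer_pull: int           # do colleagues already ask them for help? (0-3)
    artifact_verified: bool  # did they show a prompt, template, or before/after example?

def classify(score: InterviewScore) -> str:
    total = score.repeatability + score.output_judgment + score.peer_pull
    # Without a verifiable artifact, cap the classification at "growing user".
    if not score.artifact_verified:
        return "growing user" if total >= 4 else "surface user"
    if total >= 7 and min(score.repeatability, score.output_judgment) >= 2:
        return "champion candidate"
    if total >= 4:
        return "growing user"
    return "surface user"

# Example: strong method, verified artifact, some peer pull -> champion candidate
print(classify(InterviewScore(repeatability=3, output_judgment=2, peer_pull=2, artifact_verified=True)))
```

The one deliberate design choice worth keeping, whatever thresholds you pick, is that an unverified claim can never score as a champion; that mirrors the artifact check in the step above.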
What does good evidence look like in real teams?
Good evidence is operational, not self-descriptive. The person you want is not the one saying they “use AI every day”; it is the one who can show that one task now moves with less rework, tighter variance, or a shorter cycle time because their workflow changed. The strongest signal is a paired pattern: a changed method plus a stable quality bar.
The non-obvious place to look is functions where work is repetitive but judgment-heavy: ops coordinators standardising handoffs, HR generalists drafting policy variants, finance leads producing first-pass analysis, legal teams structuring clause comparisons. Those roles often produce better champion evidence than the loudest product or engineering users because the output standard is easier to inspect.
| weak evidence | strong evidence |
|---|---|
| “I prompt a lot” | “I use the same prompt structure for ticket triage, then verify against source notes” |
| generic rewrite/summarise use | one workflow with a clear review standard |
| experimentation in isolation | reusable method taught to peers |
| tool familiarity | trusted output that is regularly shipped |
There is also a management tell here. McKinsey’s 2025 global survey found AI high performers were three times more likely than peers to strongly agree that senior leaders showed ownership and commitment to AI initiatives (McKinsey, 2025). Once you find a few credible examples, they become the seed for office hours, team-specific workshops, and peer proof that the rollout is about changed work, not more licenses.
Bottom line
The best AI champions are the people who can prove a workflow changed, not the people who say they use AI a lot. Stop using surveys or manager nominations to find them; run structured interviews, verify the task/output evidence, and seed workshops or a champions program from the quiet operators already doing the work.
If you’re already seeing the pattern in your interview data - a few people using AI well, most stuck at surface-level prompting, and no clear way to turn that into a team-wide lift - that’s the gap we work on. We use the same voice interviews to surface the champions, map where the workflow breaks, and turn that into a practical next step, whether that’s a diagnostic or a targeted enablement plan.
Your team has AI tools but adoption is shallow? We measure it and fix it. Book a diagnostic call -> calendar.app.google or email hi@AI-Beavers.com
FAQ
What interview questions identify AI champions best?
The strongest questions force a person to reconstruct a real work episode, not describe their opinion about AI. Ask for the exact task, the inputs they used, what changed in the workflow, and the check they used before shipping the output. If they cannot name a repeatable failure mode or the point where human review still matters, they are usually describing experimentation rather than a stable practice.
How do you verify someone is an AI champion and not just good at talking about AI?
Look for artifacts that can be checked independently, such as a prompt template, a before/after sample, a review checklist, or a shared doc that others on the team actually use. A real champion usually leaves operational traces in the workflow, not just a strong self-description in an interview. If possible, ask a second person in the same team to describe the same process and compare the two accounts for consistency.
What are signs of a real AI champion in a team?
Real champions usually reduce rework, standardize a recurring task, or make a judgment-heavy step faster without lowering quality. They also tend to be the person others quietly ask for help, even if they are not the most senior or visible employee. Another useful signal is whether their method can be taught in under an hour and still produce similar results for a peer.