Universal Interview Framework

An AI-powered interview preparation and candidate assessment toolkit you can adapt to any role, any company, any domain. Load your project context, drop a resume, get a tailored interview plan and post-interview analysis.

Role-agnostic · Claude Code powered · Context-aware · Open framework

How It Works

Four steps from job description to hiring decision. Claude Code reads CLAUDE.md + your project context and does the rest.

1. Load Context: company, role, JD, and key signals
2. Drop a Resume: PDF, DOCX, or pasted text
3. Interview Plan: tailored questions in a Word document
4. Paste Transcript: full analysis, scoring & verdict generated

Step 1 — Load Project Context

Drop the following into context/ so Claude understands what role you are hiring for:

  • Company brief (size, industries, market positioning, culture)
  • Role / job description (responsibilities, seniority, success criteria)
  • Required and preferred skills (domain knowledge, certifications, tools)
  • Competitive / domain context (competitors, technology choices, key terminology)
  • Custom scoring criteria (override the defaults if your role needs different signals)
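
A minimal context/PROJECT_CONTEXT.md might look like the sketch below. The headings, company, and role details are purely illustrative, not a required schema — use whatever structure describes your role clearly:

```markdown
# Project Context

## Company
Acme Analytics — 120-person B2B SaaS in the data-infrastructure market.

## Role
Senior Backend Engineer. Owns the ingestion pipeline; success means
hitting latency and uptime targets within two quarters.

## Required / Preferred Skills
Go or Python, Kafka, Postgres. Preferred: Terraform, SOC 2 experience.

## Competitive / Domain Context
Competitors: Fivetran, Airbyte. Key terminology: CDC, ELT, connectors.

## Scoring Overrides
Treat "Role-Specific Capability" as system design for this role.
```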
# 1. Clone and open with Claude Code
git clone <your-fork-of-this-repo>
cd universal-interviews
claude

# 2. Edit context/PROJECT_CONTEXT.md with your role + JD
code context/PROJECT_CONTEXT.md

# 3. Drop a resume and ask Claude
cp ~/Downloads/candidate_resume.pdf .
"Generate an interview plan for this candidate"

# 4. After the interview, paste the transcript
"Here is the interview transcript. Analyze it."

What Gets Generated

Claude Code reads your project context plus the framework rules and produces professional deliverables automatically.

📄 Interview Plan (Word)
Tailored questions across 6 sections based on the candidate's resume and your role context. Targets gaps, validates claims, includes red flags and a scoring sheet.

📊 Post-Interview Analysis (Word)
Section-by-section breakdown with direct quotes. Three universal tests: Honesty, Role Readiness, Analogical Thinking. Scored on 9 calibrated criteria with a clear verdict.

👥 Candidate Comparison
Every new candidate is compared against everyone you have already interviewed for the same role. The comparison table updates automatically.

🎤 Audio Transcription
Drop an MP3 or WAV recording and Claude transcribes it with Whisper, then runs the full analysis automatically.

📧 Email Delivery
Interview guides can be emailed to other interviewers via Resend. Formatted HTML with the scoring sheet included.

🤖 Context-Aware
The framework adapts to your role. Whether you are hiring a backend engineer, a sales lead, a data scientist, or a domain specialist, the questions match your context.

Default Scoring Framework

9 criteria scored 1–5, averaged into a final verdict. Override or extend any criterion in your project context.

  • Domain Depth: hands-on knowledge of the role's primary domain (frameworks, tools, methodologies)
  • Technical / Functional Skills: certifications, breadth of tooling, depth of execution, architecture or process design
  • Role-Specific Capability: the signature skill of the role (e.g. selling, coding, designing, leading, analyzing)
  • Verified Achievements: specific numbers, decomposed claims, credible project descriptions with scale
  • Personal vs. Brand: did this person deliver outcomes themselves, or ride a strong company brand?
  • Competitive Awareness: understanding of alternatives in the market and how to position against them
  • Culture & Context Fit: mindset, growth orientation, alignment with the company's stage and style
  • Honesty & Self-Awareness: volunteers weaknesses, describes failures, corrects inaccuracies
  • Communication: reads the room, calibrates to audience, tells stories with specifics
  • 4.0+ → STRONG YES
  • 3.5–3.9 → YES
  • 3.0–3.4 → YES (conditional)
  • 2.5–2.9 → MAYBE
  • < 2.5 → NO
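
The averaging-and-thresholding logic above can be sketched in a few lines of Python. This is a minimal illustration of the verdict mapping, not the framework's actual implementation (Claude Code applies these rules from CLAUDE.md); the `verdict` helper is hypothetical:

```python
def verdict(scores):
    """Map nine 1-5 criterion scores to a hiring verdict.

    Thresholds mirror the scoring framework's verdict bands.
    `verdict` is a hypothetical helper for illustration only.
    """
    avg = sum(scores) / len(scores)
    if avg >= 4.0:
        return "STRONG YES"
    if avg >= 3.5:
        return "YES"
    if avg >= 3.0:
        return "YES (conditional)"
    if avg >= 2.5:
        return "MAYBE"
    return "NO"

# Example: nine criterion scores averaging exactly 4.0
scores = [5, 4, 4, 3, 4, 3, 4, 5, 4]
print(verdict(scores))  # → STRONG YES
```

Note the bands are half-open: a 3.45 average rounds into "YES (conditional)", not "YES", because only averages of 3.5 and above clear the next threshold.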

Lessons That Travel Across Roles

Patterns that hold up whether you are hiring an engineer, a seller, a designer, or a domain expert.

1. Honesty is the #1 predictor
Candidates who volunteer weaknesses, describe failures, and correct themselves consistently outperform polished but evasive ones. Authenticity is hard to fake under pressure.

2. The signature-skill question is a litmus test
Every role has one core skill. Ask about it directly. If the candidate pivots away from it repeatedly, they do not have it — regardless of what the resume claims.

3. "Walk me through" beats "Tell me about"
"Walk me through your last release / deal / design / investigation" produces 10x more signal than open-ended "tell me about" prompts. It forces specifics.

4. Numbers separate real contributors from bystanders
Strong candidates can decompose their claims (team size, time period, personal contribution, outcome metric). Weak candidates give round numbers or no numbers at all.

5. Buzzword density inversely correlates with substance
The more often "scalable," "end-to-end," and "stakeholder alignment" appear in a candidate's answers, the less likely there is real depth behind them. Follow up with: "Give me the actual number."

6. Domain gaps are fixable; personality gaps are not
A candidate with the right mindset can learn a new domain in weeks. A candidate with poor self-awareness or communication will not improve regardless of training.

Repository Structure

Everything is organized for Claude Code to navigate automatically.

universal-interviews/
├── CLAUDE.md              # Framework rules (auto-loaded)
├── README.md              # Human-readable guide
├── context/
│   └── PROJECT_CONTEXT.md # Your company + role + JD
├── candidates/            # One folder per candidate
├── transcripts/           # Interview transcripts
├── templates/             # Email templates
└── site/                  # This landing page