No guesswork. No black boxes. Just what AI actually says.
We query ChatGPT, Claude, and Gemini with the exact questions candidates ask. Then we analyze responses, flag risk signals, and track changes daily — so you know exactly what's shaping their first impression.
Or view a sample report first
AI models monitored (ChatGPT, Claude, Gemini)
25+
candidate-style prompts per company
Daily
monitoring cadence
4
risk dimensions scored
Our Philosophy
We measure perception, not truth. Because perception is what drives decisions.
AI models don't reflect your actual culture. They reflect what's visible, repeated, and statistically likely in their training data. That's what candidates believe.
Our methodology is built around this reality, not in denial of it.
Data Sources
Where AI gets its stories about you
AI doesn't make things up from nothing. It synthesizes everything public — and serves it as authoritative truth.
AI Models (Primary)
ChatGPT, Claude, Gemini — we query the same tools candidates use. These outputs are what actually shape first impressions.
Review Platforms
Glassdoor, Blind, Indeed — where AI pulls sentiment signals. A 3-star review from 2021 can still haunt your AI reputation today.
Public Signal Sources
News, LinkedIn, Twitter/X, Reddit — everything public that AI ingests and synthesizes into narratives about your culture.
Privacy-first: We never ingest private, proprietary, or employee-level data.
Which AI models we monitor
We query the same AI tools candidates actually use: ChatGPT, Claude, and Gemini.
Why This Matters
- AI outputs change without warning — models update silently
- Different models tell different stories — ChatGPT and Claude often disagree
- Candidates use all of them — you need full coverage
That's why daily monitoring beats one-off checks.
Prompting
We ask what candidates actually ask
Our prompts mirror real candidate questions across 5 critical dimensions. Fixed prompts ensure comparability over time.
Culture & Experience
"What's it like to work at [Company]?"
Leadership & Management
"How do employees feel about leadership at [Company]?"
Workload & Burnout
"Is [Company] known for work-life balance or burnout?"
Toxicity & Ethics
"Is [Company] considered toxic?"
Advocacy
"Would you recommend working at [Company]?"
What is it like to work at [Company] as an engineer?
Is [Company] known for burnout culture?
Would you recommend [Company] as a place to work?
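The fixed prompt set above can be pictured as simple templates keyed by dimension. This is an illustrative sketch, not our production prompt engine; the names and structure here are assumptions:

```python
# Illustrative sketch: fixed, comparable prompt templates per dimension.
# The wording mirrors the example prompts above; PROMPT_TEMPLATES and
# build_prompts are hypothetical names for this sketch.
PROMPT_TEMPLATES = {
    "culture": "What's it like to work at {company}?",
    "leadership": "How do employees feel about leadership at {company}?",
    "workload": "Is {company} known for work-life balance or burnout?",
    "toxicity": "Is {company} considered toxic?",
    "advocacy": "Would you recommend working at {company}?",
}

def build_prompts(company: str) -> dict:
    """Fill every template with the company name, so each daily run
    asks the exact same questions and results stay comparable over time."""
    return {dim: t.format(company=company) for dim, t in PROMPT_TEMPLATES.items()}
```

Because the templates never change, a score movement reflects a change in the AI's answer, not a change in the question.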
Analysis Engine
From AI language to actionable risk signals
Raw AI outputs become HR-relevant intelligence through our four-layer analysis framework.
Confidence Detection
Medium: 'Some say...' vs 'is known for...' — we distinguish hedged claims from definitive statements that hit harder.
Persistence Tracking
High: If 'burnout culture' shows up 5 days in a row across 3 models, that's a systemic perception, not a fluke.
Risk Framing
Medium: We flag language that implies warning vs endorsement. 'Fast-paced' can be positive or a red flag for burnout.
Impact Weighting
Critical: 'Toxic leadership' mentioned once outweighs 'free snacks' mentioned 10 times. Severity matters.
Conservative by design. When in doubt, we flag ambiguity — not invent certainty.
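As a rough illustration of the confidence-detection layer, a minimal heuristic can separate hedged phrasing from definitive phrasing. This is a toy sketch under assumed marker lists; the real framework uses richer signals than keyword matching:

```python
# Toy sketch of confidence detection: classify a claim as hedged or
# definitive based on marker phrases. Marker lists are illustrative
# assumptions, not our actual detection rules.
HEDGED_MARKERS = ("some say", "some reviews", "reportedly", "mixed opinions")
DEFINITIVE_MARKERS = ("is known for", "is widely seen as", "has a reputation for")

def confidence_level(claim: str) -> str:
    text = claim.lower()
    if any(m in text for m in DEFINITIVE_MARKERS):
        return "definitive"  # authoritative phrasing hits harder
    if any(m in text for m in HEDGED_MARKERS):
        return "hedged"      # flagged as ambiguous, not certain
    return "neutral"
```

Scoring confidence separately from sentiment is what lets a definitive-sounding claim get flagged even when its underlying evidence is thin.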
Scoring
Structured signals, not false precision
Scores summarize patterns. Every score includes narrative context and confidence level — because numbers without explanation are dangerous.
Culture Score
0-100: Overall work environment perception
Leadership Trust
0-100: Confidence in management
Toxicity Risk
Low/Med/High/Critical: Red flags for toxic culture
Burnout Signal
Low/Med/High/Critical: Workload sustainability perception
Scores support interpretation. They don't replace judgment.
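To show why severity outweighs frequency in impact weighting, here is a toy version of a severity-weighted risk band. The weights and thresholds are illustrative assumptions, not our production scoring model:

```python
# Toy sketch of impact weighting: severity multiplies frequency, so one
# critical claim can outweigh many trivial ones. Weights and band cutoffs
# are made-up values for illustration only.
SEVERITY_WEIGHTS = {"trivial": 1, "moderate": 5, "critical": 25}

def risk_band(mentions):
    """mentions: list of (severity, count) pairs,
    e.g. [("critical", 1), ("trivial", 10)]."""
    score = sum(SEVERITY_WEIGHTS[sev] * count for sev, count in mentions)
    if score >= 25:
        return "High"
    if score >= 10:
        return "Medium"
    return "Low"
```

With these example weights, one 'toxic leadership' mention (critical) lands in a higher band than ten 'free snacks' mentions (trivial), mirroring the principle above.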
Change Tracking
Trends matter more than snapshots
AI perception shifts gradually. Daily monitoring catches emerging risks before they become entrenched narratives.
New Claim Detected
'Layoff culture' appears for first time
Claim Intensifying
'Burnout' mentioned 3x more this week
Tone Shift
'Fast-paced' now framed negatively
Claim Fading
'Toxic leadership' no longer mentioned
Trend direction matters more than absolute values. That's why we check daily.
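The change categories above can be sketched as a simple classifier over daily mention counts for a single claim. Thresholds here are illustrative assumptions, and tone shift is omitted because it needs sentiment, not counts:

```python
# Toy sketch of trend detection on daily claim counts, oldest first.
# Event names mirror the categories above; the 3x intensification
# threshold is an assumed example value.
def classify_trend(daily_counts):
    """daily_counts: mentions of one claim per day, oldest first."""
    if len(daily_counts) < 2:
        return "insufficient data"
    prev, curr = daily_counts[-2], daily_counts[-1]
    if prev == 0 and curr > 0:
        return "New Claim Detected"
    if curr >= 3 * prev > 0:
        return "Claim Intensifying"
    if prev > 0 and curr == 0:
        return "Claim Fading"
    return "Stable"
```

A one-off check only ever sees the latest count; it is the day-over-day comparison that makes these events detectable at all.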
Transparency
What we can't do (and why we tell you)
Hiding limitations is dishonest. Here's what our methodology can and cannot capture.
Bias amplification
AI reflects and amplifies biases in public discourse. A vocal minority on Glassdoor can dominate the narrative. We flag this when detected.
Stale data
Models may cite 2-year-old reviews or outdated news. That 2019 layoff still shapes your AI reputation. We track persistence to distinguish stale from active.
False confidence
'Is known for toxic culture' sounds authoritative — even if based on 3 Reddit comments. We score confidence separately from sentiment.
Invisible changes
Your new CEO, culture initiatives, or internal wins aren't in AI's training data yet. We can't measure what AI hasn't learned.
Transparency about limits is part of methodological rigor. That's why this section is here.
Ethics & Compliance
Built for insight, not surveillance
We observe public AI outputs. We never touch employee data or internal systems.
Public signals only
We analyze what's already publicly visible to AI — nothing more
Zero employee data
No surveys, no integrations, no personal information
Read-only observation
We observe AI outputs. We never manipulate or influence them.
GDPR-compliant
Designed with European privacy standards from day one
See what AI is telling candidates about you right now
The fastest way to understand your AI perception is to see your own report. Takes 5 minutes.
No integrations. No employee data. Just your company name.