Methodology
How we measure AI visibility, semantic presence, and brand mentions. Transparent definitions for all custom metrics.
Last updated: 2025-02-08
Ranketize is an AI visibility and brand trust consultancy that helps SaaS and digital brands become more likely to be referenced in AI-generated answers (ChatGPT, Perplexity, Google AI Overviews) through ethical, observational methods.
Semantic Presence Score
What It Measures
Semantic Presence Score is a proprietary metric that measures brand authority signals in the publicly accessible sources LLMs learn from. It quantifies how well-positioned your brand is to be cited by AI models when users ask relevant questions.
Data Inputs
We analyze brand mentions across multiple public web sources that AI models frequently cite:
- Reddit: Comment threads, AMAs, and discussion mentions
- Wikipedia: Article citations and references
- GitHub: Repository descriptions, README files, and documentation
- Medium & Industry Publications: Article mentions and expert commentary
- Quora & Forums: Expert answers and community discussions
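To make the data inputs concrete, here is a minimal sketch of sampling one signal: counting exact-phrase matches for a brand name on English Wikipedia via the public MediaWiki search API. It illustrates a single source only; Reddit, GitHub, and the others have their own APIs, and this is not our production collector.

```python
import requests

def wikipedia_mention_count(brand: str) -> int:
    """Rough proxy for one signal: pages on English Wikipedia
    whose text contains the brand name as an exact phrase."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": f'"{brand}"',  # quotes request exact-phrase matching
            "format": "json",
        },
        headers={"User-Agent": "mention-sampler-demo/0.1"},  # per Wikimedia etiquette
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["query"]["searchinfo"]["totalhits"]

print(wikipedia_mention_count("Perplexity AI"))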
Scoring Scale & Weighting
Scores range from 0 to 100, calculated using a weighted approach:
- Citation Frequency (40%): How often your brand is mentioned in authoritative sources
- Context Quality (30%): Sentiment and relevance of mentions (positive/neutral > negative)
- Source Authority (20%): Credibility of the platforms where mentions appear
- Recency (10%): How recent the mentions are (recent > older)
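In code, this weighting reduces to a weighted average. A minimal sketch, assuming each component has already been normalized to a 0-100 scale (the normalization itself is the proprietary part):

```python
# Weights mirror the breakdown above. Component scores are assumed to be
# pre-normalized to 0-100 by upstream collectors (an assumption of this sketch).
WEIGHTS = {
    "citation_frequency": 0.40,
    "context_quality": 0.30,
    "source_authority": 0.20,
    "recency": 0.10,
}

def semantic_presence_score(components: dict[str, float]) -> float:
    """Weighted average of the four component scores, each on a 0-100 scale."""
    return round(sum(WEIGHTS[k] * components[k] for k in WEIGHTS), 1)

# Example: strong citation volume, mixed context, solid sources, fresh mentions.
print(semantic_presence_score({
    "citation_frequency": 72,
    "context_quality": 55,
    "source_authority": 80,
    "recency": 90,
}))  # -> 70.3
```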
Update Cadence
Semantic Presence Scores are recalculated monthly. We track changes over time to measure improvement in brand authority signals across AI-citable sources.
Limitations & Measurement Reality
Important: There is no native "AI analytics" from OpenAI, Google, or other AI providers. Our measurements are proxy-based and directional, not precise counts of AI impressions or mentions.
This metric is proprietary and based on publicly available data. It does not access private training data or proprietary LLM internals. Scores are relative indicators of brand authority in AI-citable sources, not absolute guarantees of AI mention frequency.
Results vary significantly by prompt wording, model version, and time. As research shows, AI responses are highly inconsistent when recommending brands. Repeated sampling (e.g., 100+ runs) is needed for stability.
Actual AI mention rates depend on many factors beyond semantic presence, including model updates, training data changes, and query context.
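The instability point can be quantified. The sketch below (standard statistics, not proprietary tooling) reports a mention rate with a 95% Wilson score interval, showing why a handful of runs proves little while 100+ runs narrows the uncertainty:

```python
import math

def mention_rate(mentions: int, runs: int, z: float = 1.96):
    """Point estimate plus a 95% Wilson score interval for the
    fraction of runs in which the brand was mentioned."""
    p = mentions / runs
    denom = 1 + z**2 / runs
    center = (p + z**2 / (2 * runs)) / denom
    half = z * math.sqrt(p * (1 - p) / runs + z**2 / (4 * runs**2)) / denom
    return p, (center - half, center + half)

# Same 70% point estimate, very different certainty:
print(mention_rate(7, 10))    # interval roughly (0.40, 0.89)
print(mention_rate(70, 100))  # interval roughly (0.60, 0.78)
```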
AI Mentions
What Counts as a Mention
An "AI mention" is a verified citation of your brand name in an LLM's response to a relevant user query. To count as a mention:
- Brand name must be explicitly stated (not just implied)
- Mention appears in response to a query relevant to your product/service category
- Mention is in positive or neutral context (not negative)
- Response is from a major LLM (ChatGPT, Claude, Perplexity, Gemini)
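Expressed as a predicate, the four criteria look roughly like this; the `Response` record and its relevance/sentiment fields are illustrative stand-ins for judgments made during manual review:

```python
from dataclasses import dataclass

TRACKED_LLMS = {"ChatGPT", "Claude", "Perplexity", "Gemini"}

@dataclass
class Response:
    platform: str          # which LLM produced the answer
    text: str              # full response text
    query_relevant: bool   # judged relevant to the product/service category
    sentiment: str         # "positive" | "neutral" | "negative"

def counts_as_mention(brand: str, r: Response) -> bool:
    """All four criteria above must hold for a response to count."""
    return (
        brand.lower() in r.text.lower()             # explicitly stated, not implied
        and r.query_relevant                        # relevant query category
        and r.sentiment in ("positive", "neutral")  # not negative context
        and r.platform in TRACKED_LLMS              # major LLM only
    )
```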
Verification Workflow
We verify AI mentions using a systematic process:
1. Query Generation: We create 50+ variations of target topics relevant to your brand
2. Multi-Platform Testing: Each query is tested across ChatGPT, Claude, Perplexity, and Gemini
3. Documentation: We screenshot responses containing brand mentions, with timestamp and query
4. Tracking: Mentions are logged with query text, LLM platform, date, and sentiment
5. Reporting: Monthly reports show mention frequency, platforms, and query contexts
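The tracking step amounts to an append-only log. A minimal JSON Lines sketch; the field names mirror the list above, but the real schema is an internal detail:

```python
import json
from datetime import datetime, timezone

def log_mention(path: str, platform: str, query: str,
                sentiment: str, screenshot: str) -> None:
    """Append one verified mention, with the fields from the tracking step."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "query": query,
        "sentiment": sentiment,
        "screenshot": screenshot,  # path to the saved screenshot
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_mention("mentions.jsonl", "Perplexity",
            "What are the best project management tools for startups?",
            "neutral", "screens/2025-02-03_perplexity.png")
```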
Avoiding Cherry-Picked Prompts
We test broad, natural query variations that real users would ask, not highly specific prompts designed to force a mention. Our query sets include common questions, comparison queries, and recommendation requests. We report both positive results (mentions found) and negative results (queries where the brand wasn't mentioned) to provide honest context.
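One way to keep query sets broad is to generate them from neutral templates that never name the target brand, so no prompt can force a mention. A simplified sketch (real sets are larger and hand-reviewed):

```python
from itertools import product

# Illustrative templates only; none of them names the target brand.
TEMPLATES = [
    "What are the best {category} tools?",
    "Can you recommend {category} software for a small team?",
    "How does {competitor} compare to other {category} options?",
    "Which {category} tool would you suggest for a startup?",
]

def build_query_set(categories: list[str], competitors: list[str]) -> list[str]:
    queries = []
    for t in TEMPLATES:
        if "{competitor}" in t:
            queries += [t.format(category=c, competitor=x)
                        for c, x in product(categories, competitors)]
        else:
            queries += [t.format(category=c) for c in categories]
    return queries

for q in build_query_set(["email warm-up"], ["Competitor A"]):
    print(q)
```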
"Featured in ChatGPT/Perplexity"
Operational Definition
A brand is "featured" in an LLM when it appears in the model's response to relevant queries. Specifically:
- What counts: Brand name explicitly mentioned in response to a natural, relevant query
- What doesn't count: Generic category mentions, implied references, or mentions only in highly specific/prompted queries
- Context matters: Mention must be in positive or neutral context, not negative
- Platform specificity: We specify which LLM (ChatGPT, Perplexity, Claude, etc.) featured the brand
Replication Rules
For transparency, we provide exact replication prompts for all "featured" claims:
- Each case study includes the exact query prompts used to verify mentions
- Prompts are natural, user-like questions (not engineered to force mentions)
- Readers can copy-paste prompts to verify claims independently
- We document the date of verification (LLM responses can change over time)
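A replication entry attached to a "featured" claim would look roughly like this; every value here is a placeholder, not a real case-study result:

```python
# Hypothetical replication entry; all values are placeholders.
replication_entry = {
    "claim": "Featured in Perplexity",
    "prompt": "What are the best email warm-up tools for cold outreach?",
    "platform": "Perplexity",
    "verified_on": "2025-01-14",  # responses may differ on later dates
    "sentiment": "neutral",
    "screenshot": "screens/2025-01-14_perplexity.png",
}
```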
Important Note
LLM responses are dynamic and change as models are updated and retrained. A brand that was "featured" in January 2025 may or may not appear for the same query in February 2025. We report mentions as of the time of verification and note that results may vary over time.
See These Metrics in Action
Our case studies show how these metrics are applied in real campaigns.
