LLM Metrics and Advanced Concepts
Metrics
Metrics across AccuLLM are averages, counts, or sums across LLM engines, depending on which aggregation fits the metric.
For example, on the prompt list, you will see the average rank for a prompt across engines. Click the arrow next to a prompt to view ranks per engine.
You can filter by a single engine using the LLM engine filter. A switch in the top-right corner of every page also lets you select an engine. While this filter is active, all statistics in the platform reflect only the selected engine.
In the following sections, we go through the metrics you find across AccuLLM.
General Metrics
In the first section, we explore metrics found in multiple places in the platform, such as the prompt list, the dashboard, and the competitors tab.
Rank
Your brand’s rank is based on how early it is mentioned in the prompt response. You are ranked against the competitors defined for your domain, including unpinned ones. A domain can have up to 25 competitors. By default, AccuLLM adds up to 20 competitors to a new domain; you can manage them during the initial brand creation process and later under the Competitors tab.
A brand is detected in one of two ways:
- If the brand domain is present in the text. This match is case-insensitive. For example, the domain “adidas.com” will match “Adidas.com is a great website”.
- If any of the defined brand spellings appear in the text. These matches are case-sensitive. For example, the spelling “foobar” will not match “FooBar is a great product”.
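For illustration, here is a minimal sketch of these two rules in Python; the function and its inputs are hypothetical stand-ins, not AccuLLM's actual implementation:

```python
# A minimal sketch of the two detection rules above; the function and
# inputs are illustrative, not AccuLLM's actual implementation.
def brand_mentioned(text: str, domain: str, spellings: list[str]) -> bool:
    # Rule 1: the domain match is case-insensitive.
    if domain and domain.lower() in text.lower():
        return True
    # Rule 2: spelling matches are case-sensitive.
    return any(spelling in text for spelling in spellings)

# "adidas.com" matches despite different casing; "foobar" does not match "FooBar".
print(brand_mentioned("Adidas.com is a great website", "adidas.com", []))       # True
print(brand_mentioned("FooBar is a great product", "example.com", ["foobar"]))  # False
```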
To be included in ranking, brands must be defined as competitors. You can add, edit, delete, or pin competitors in the Competitors tab. Changes only apply to future prompt responses; past responses are not re-evaluated.
Simplified example
Your brand: Nike
Your competitors: Adidas, Puma
Prompt response:
Some of the best sneakers are made by Reebok, but Adidas also makes very nice sneakers. Nike's sneakers are also awesome. The sneakers from Asics are also great, and the same can be said for Puma.
Here, Reebok and Asics are not defined as competitors, so they do not count. Adidas ranks 1, Nike 2, and Puma 3. If Reebok is later added as a competitor, it will only affect future prompt responses, not this one.
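A minimal Python sketch of this ranking logic, assuming rank follows the position of each brand's first mention (the function name is illustrative):

```python
# A minimal ranking sketch under the rules above: only your brand and
# defined competitors are ranked, ordered by first mention position.
def rank_brands(response: str, brands: list[str]) -> dict[str, int]:
    positions = {b: response.find(b) for b in brands if b in response}
    ordered = sorted(positions, key=positions.get)
    return {brand: i + 1 for i, brand in enumerate(ordered)}

response = ("Some of the best sneakers are made by Reebok, but Adidas also "
            "makes very nice sneakers. Nike's sneakers are also awesome. The "
            "sneakers from Asics are also great, and the same can be said for Puma.")
print(rank_brands(response, ["Nike", "Adidas", "Puma"]))
# {'Adidas': 1, 'Nike': 2, 'Puma': 3}; Reebok and Asics are not counted.
```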
Visibility
Visibility is 0% or 100% per engine, based on whether your brand appears in a prompt response. A visibility of 0% means there is no rank. In the prompt list, the visibility shown is the average across engines: if your brand is visible in 3 out of 4 engines, visibility is 75%. Each engine is weighted equally in this calculation. If you are interested in statistics for a specific engine, apply the LLM engine filter.
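As a sketch of the calculation (engine names here are just examples):

```python
# Visibility as described above: 0% or 100% per engine, averaged with
# equal weight across engines.
def visibility(visible_by_engine: dict[str, bool]) -> float:
    return 100.0 * sum(visible_by_engine.values()) / len(visible_by_engine)

# Visible in 3 of 4 engines -> 75%.
print(visibility({"ChatGPT": True, "Gemini": True,
                  "Perplexity": True, "AI Overviews": False}))  # 75.0
```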
Sentiment
After ranking, we analyze the sentiment of the text related to your brand. This includes both direct mentions and the surrounding context. For example, if your brand appears in a list, the heading may be included to determine sentiment. If it appears in a table, we also consider the header row or column. All relevant snippets are combined into a single sentiment score per prompt, ranging from 1 (least favorable) to 100 (most favorable).
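The exact combination rule is not specified here; as a rough illustration, assuming a simple mean of hypothetical per-snippet scores:

```python
# Toy roll-up of per-snippet sentiment into one score per prompt. The
# per-snippet scores and the use of a mean are assumptions for illustration.
def combine_sentiment(snippet_scores: list[float]) -> float:
    mean = sum(snippet_scores) / len(snippet_scores)
    return min(100.0, max(1.0, mean))  # clamp to the 1-100 scale

# e.g. scores for a direct mention, its list heading, and a table header:
print(combine_sentiment([80.0, 65.0, 90.0]))  # 78.33...
```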
KPIs
Mentions
A mention is recorded every time your brand is mentioned in a prompt response, based on matching your defined brand spellings. You can have multiple mentions for one prompt on one LLM engine.
Citations
A citation is recorded every time your domain is cited in a prompt response. You can have multiple citations for one prompt on one LLM engine.
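A counting sketch for both KPIs, with hypothetical inputs (the matching details follow the detection rules described under Rank):

```python
import re

# Counting sketch for the two KPIs above; inputs and names are illustrative.
def count_mentions(response: str, spellings: list[str]) -> int:
    # Every case-sensitive spelling match counts as one mention.
    return sum(len(re.findall(re.escape(s), response)) for s in spellings)

def count_citations(cited_sources: list[str], domain: str) -> int:
    # Every cited source URL containing your domain counts as one citation.
    return sum(domain in source for source in cited_sources)

print(count_mentions("Nike and Nike Air are both popular.", ["Nike"]))  # 2
print(count_citations(["https://nike.com/air", "https://runrepeat.com"],
                      "nike.com"))                                      # 1
```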
Prompt List Metrics
The following metrics are found on the prompt list only.
Top Brands
Top Brands shows which brand was ranked 1 (i.e., mentioned first) for each LLM engine on a given prompt. Each icon represents one engine. You will see four identical icons if the same brand is mentioned first across the four engines.
Rank is based on mention order — the first matching competitor (or your own brand) mentioned is rank 1.
Clicking the Top Brands cell opens a popover showing the top 5 ranked brands per engine for that prompt.
Expanding the aggregated row shows which brand was mentioned first on each LLM engine.
Top Sources
Top Sources shows which source was cited first (i.e., citation rank 1) for each LLM engine on a given prompt. Each icon represents one engine. You will see four identical icons if the same source is cited first across all four engines.
Citation rank is based on the order in which citations are used — the first source cited has citation rank 1.
Clicking the Top Sources cell opens a popover showing that prompt's top 5 cited sources per engine.
Expanding the aggregated row shows which source was cited first on each engine.
Source Count
The Source count in the aggregated row of the prompt list is the number of unique sources used across all LLM engines' responses to that prompt.
Expanding the aggregated row shows the source count for each LLM engine. Note that the per-engine source counts will not necessarily add up to the aggregated count, because a source used in more than one engine's response is only counted once in the aggregate.
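A deduplication sketch with made-up data:

```python
# Deduplication sketch for the aggregated row: a source cited by several
# engines counts once in the aggregate, so per-engine counts can sum higher.
def source_count(sources_by_engine: dict[str, list[str]]) -> int:
    return len({s for sources in sources_by_engine.values() for s in sources})

per_engine = {"ChatGPT": ["a.com", "b.com"], "Gemini": ["b.com", "c.com"]}
print(source_count(per_engine))                   # 3 unique sources
print(sum(len(s) for s in per_engine.values()))   # 4 when summed per engine
```

The Brand count below is deduplicated across engines in the same way.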
Brand Count
The Brand count in the aggregated row of the prompt list is the number of unique brands mentioned across all LLM engines' responses to that prompt.
Expanding the aggregated row shows the brand count for each LLM engine. Note that the per-engine brand counts will not necessarily add up to the aggregated count, because a brand mentioned in more than one engine's response is only counted once in the aggregate.
Sources Metrics
The following metrics are found on the Sources tab and in the Sources tables on the dashboard.
Average Citation Rank
This shows the average position at which a source is cited across prompts. Lower values mean the source tends to be cited earlier or more prominently. This is not to be confused with rank, which applies to brand mentions; citation rank looks strictly at the sources referenced in the prompt response.
Cited
Cited shows the percentage of prompts in which a source is cited at least once. It is calculated as the number of unique prompts (across all LLM engines) in which the source appears divided by the total number of prompts.
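Both metrics can be sketched as follows (inputs are illustrative):

```python
# Sketches of the two Sources metrics above.
def avg_citation_rank(citation_ranks: list[int]) -> float:
    # Mean position at which the source was cited across prompt responses.
    return sum(citation_ranks) / len(citation_ranks)

def cited_percentage(prompts_citing_source: set[str], all_prompts: set[str]) -> float:
    # Unique prompts (across all engines) that cite the source / total prompts.
    return 100.0 * len(prompts_citing_source) / len(all_prompts)

print(avg_citation_rank([1, 3, 2]))                              # 2.0
print(cited_percentage({"p1", "p2"}, {"p1", "p2", "p3", "p4"}))  # 50.0
```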
Filtering
Filters generally work on aggregated metrics. For example, filtering by rank < 2 shows prompts where the average rank across engines is below 2.
If you also apply an LLM engine filter, it applies first. For instance, to see ChatGPT prompts where rank < 2, apply both filters: one for LLM engine = ChatGPT and one for rank < 2. The LLM engine filter is available in the filter bar and via the top-right switch on every tab.
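A sketch of the order of application, with made-up rows:

```python
# Order-of-application sketch: the engine filter narrows the rows first,
# then the metric filter applies to what remains.
rows = [
    {"prompt": "best sneakers", "engine": "ChatGPT", "rank": 1},
    {"prompt": "best sneakers", "engine": "Gemini",  "rank": 3},
    {"prompt": "running shoes", "engine": "ChatGPT", "rank": 4},
]
chatgpt_rows = [r for r in rows if r["engine"] == "ChatGPT"]  # engine filter
result = [r for r in chatgpt_rows if r["rank"] < 2]           # then rank < 2
print(result)  # [{'prompt': 'best sneakers', 'engine': 'ChatGPT', 'rank': 1}]
```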
LLM Engine Support by Country
Not all LLM engines are available in every country. For example, Google has generally not rolled out AI Mode in EU countries (and some EEA countries), and France has not yet received AI Overviews in Google searches.
Prompt Suggestions
AccuLLM generates prompt suggestions in several steps. Here is how the process works:
1. Evaluating existing brand knowledge
First, we check what the LLMs already know about your brand. This helps us establish whether the model already has sufficient understanding or additional context is needed.
2. Scraping Google for supplemental information
We scrape Google’s top organic search results related to your brand to close any gaps. This ensures our suggestions are grounded in up-to-date and publicly available information.
3. Generating a domain description
Using the collected data, we create a concise summary of your brand or domain. This description serves as a foundation for generating context-aware suggestions.
4. Generating candidate categories
We generate multiple category candidates for your brand using different prompting techniques. These categories represent possible areas of user interest or content themes relevant to your domain.
5. Selecting the best categories
We evaluate and score the generated categories based on their relevance to your brand and target region. Only the strongest candidates are selected for the next steps.
6. Generating prompt suggestions
For each selected category, we generate suggestions using several prompt templates. Each template targets a specific search intent: navigational, informational, commercial, or transactional. This helps create suggestions that match different user needs.
7. Assigning search intents
Each suggestion is classified into a specific search intent: navigational, informational, commercial, or transactional. This classification is done in a separate step to ensure the assigned intent is accurate, regardless of which prompt template generated the suggestion. You can use this classification to filter the list of suggestions.
After these steps, your finalized prompt suggestions are prepared and displayed. They are ready for review, refinement, or immediate use.
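To make the flow concrete, here is a toy end-to-end sketch; every helper below is a hardcoded stand-in for an LLM or scraping step, not AccuLLM's real implementation:

```python
# A toy, end-to-end sketch of the seven steps above. Every helper is a
# hypothetical stub, not AccuLLM's real code.
INTENTS = ["navigational", "informational", "commercial", "transactional"]

def evaluate_brand_knowledge(brand):                  # step 1 (stub)
    return {"brand": brand, "sufficient": False}

def scrape_google_results(brand):                     # step 2 (stub)
    return [f"{brand} sells running shoes and sportswear"]

def summarize_domain(knowledge, snippets):            # step 3 (stub)
    return knowledge["brand"] + ": " + "; ".join(snippets)

def generate_categories(description):                 # step 4 (stub)
    return ["running shoes", "sportswear", "store locations"]

def select_best_categories(candidates, region, k=2):  # step 5 (stub)
    return candidates[:k]

def generate_from_template(category, intent):         # step 6 (stub)
    return [f"[{intent}] best {category}"]

def classify_intent(suggestion):                      # step 7 (stub)
    return suggestion.split("]")[0].lstrip("[")

def suggest_prompts(brand, region):
    knowledge = evaluate_brand_knowledge(brand)
    snippets = [] if knowledge["sufficient"] else scrape_google_results(brand)
    description = summarize_domain(knowledge, snippets)
    categories = select_best_categories(generate_categories(description), region)
    suggestions = [s for c in categories for i in INTENTS
                   for s in generate_from_template(c, i)]
    return [{"text": s, "intent": classify_intent(s)} for s in suggestions]

print(suggest_prompts("Nike", "US")[0])
# {'text': '[navigational] best running shoes', 'intent': 'navigational'}
```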
Hint: If you like a prompt but want to tweak the wording slightly, you can add it for import and then edit the text before saving.