Click Dashboard in the navigation panel.
The dashboard filters are at the top of the screen. By default, statistics cover the past 30 days for all groups, all users, and all models. Change the filters to tailor the statistics by date, group, user, model, or any combination of these.
Use the drop-down calendar to select the timeframe you wish to review (Last 30 Days, 3 Months, or 6 Months).
Select the group, if relevant, from the Group drop-down list.
Select the user, if relevant, from the User drop-down list.
Select the model, if relevant, from the Model drop-down list.
Hover over any point on any graph to see detailed information presented in pop-up boxes.
The dashboard graphs present the following information:
Blocked Prompts/Prompts Sent (Quantity)
These cells display the total number of prompts sent to models and the number blocked before reaching them. This provides a view of the volume of data flowing through the system and the share of prompts blocked relative to total attempts.
Prompts Sent/Blocked (Percentage)
This graph presents the percentage of prompts and responses blocked. A high or rising percentage can indicate deficiencies in employee training on acceptable use.
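As a simple illustration of how such a percentage is derived, the sketch below computes a block rate from raw counts. The function name and inputs are hypothetical, not the product's actual data format.

```python
# Hypothetical counts; the dashboard derives these from its own records.
def block_rate(blocked: int, total: int) -> float:
    """Return the percentage of prompts blocked, guarding against division by zero."""
    if total == 0:
        return 0.0
    return round(blocked / total * 100, 1)

# Example: 45 of 600 prompt attempts blocked.
print(block_rate(45, 600))  # 7.5
```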
Policy Management Scanners
This graph presents the Policy scanners that are triggered most and least often. Frequent triggering can indicate that users are unaware that content they routinely include in prompts is prohibited from leaving the system. This could mean that the employee training program or the risk management and access control policies should be reviewed or revised, or that a particular scanner is generating many false positives and needs to be tuned.
Most Blocked Prompts Per User
This graph identifies the users responsible for the most blocked prompts in the selected time period and shows whether the number has increased, decreased, or remained steady over time. Such information could indicate a need for additional user education regarding the organization’s acceptable use policy, or for investigation of more serious risk-based issues.
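Conceptually, this ranking is a per-user tally of blocked-prompt events. The sketch below shows one way such a tally could be computed; the event list and user names are hypothetical examples, not the product's actual audit data.

```python
from collections import Counter

# Hypothetical blocked-prompt events, one user ID per blocked prompt.
blocked_events = ["alice", "bob", "alice", "carol", "alice", "bob"]

def top_blocked_users(events, n=3):
    """Return the n users with the most blocked prompts, most first."""
    return Counter(events).most_common(n)

print(top_blocked_users(blocked_events))  # [('alice', 3), ('bob', 2), ('carol', 1)]
```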
Usage/Latency Per LLM Provider
These graphs display total usage by model and changes in the average response time for each model. Monitoring usage can provide fine-grained insights that assist with resource management, for instance, deciding whether to retire or upgrade models based on usage and performance.
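The latency figure shown per model is an average over individual response times. A minimal sketch of that aggregation follows; the model names and sample values are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical (model, response_time_ms) samples.
samples = [("model-a", 820), ("model-a", 780), ("model-b", 1200), ("model-b", 1000)]

def average_latency(records):
    """Return the mean response time in milliseconds per model."""
    sums = defaultdict(lambda: [0, 0])  # model -> [total_ms, sample_count]
    for model, ms in records:
        sums[model][0] += ms
        sums[model][1] += 1
    return {model: total / count for model, (total, count) in sums.items()}

print(average_latency(samples))  # {'model-a': 800.0, 'model-b': 1100.0}
```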
Topics Scanning
This table presents the percentage of prompts containing content identified as unrelated to business functions. The information is shown by category, ordered by frequency of appearance in prompts (highest to lowest or lowest to highest). This provides insight into user activity that, while not harmful, does not contribute to managing resources or achieving business goals.
Usage Trends
This graph displays the level of activity/demand and identifies the average peak usage time. Such information can provide insights into model usage and user behavior, which can assist with decision-making around access controls, rate limits, and other resource-related issues.
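Peak usage time amounts to finding the hour of day with the most prompt activity. The sketch below shows that calculation over a handful of timestamps; the timestamp list is a hypothetical stand-in for the system's usage records.

```python
from collections import Counter
from datetime import datetime

# Hypothetical ISO 8601 prompt timestamps.
timestamps = [
    "2024-05-01T09:15:00", "2024-05-01T09:40:00",
    "2024-05-01T14:05:00", "2024-05-02T09:30:00",
]

def peak_usage_hour(ts_list):
    """Return the hour of day (0-23) with the most prompt activity."""
    hours = Counter(datetime.fromisoformat(ts).hour for ts in ts_list)
    return hours.most_common(1)[0][0]

print(peak_usage_hour(timestamps))  # 9
```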
Sentiment Recording
This graph presents the type and degree of sentiment (positive, negative, neutral) detected in prompts sent by users. Where enabling this feature is permitted, it can provide information about individual users that is important when making decisions about resource allocation, as well as security and risk management.
For detailed information about specific prompts, responses, or users, see the Prompt History Overview and the Audit Logs Overview.