Frequently Asked Questions

Review FAQs about the CalypsoAI platform.


Basics

What languages does CalypsoAI support?

At this time, only English is officially supported. Prompts can be sent in other languages, but if the LLM responds successfully to a non-English prompt, that behavior is incidental and not guaranteed by our solution, because all of the training data sets used for the scanners were written in English.

Can a single user be added to multiple teams?

Yes, individual users can be added to multiple teams, giving them access to a variety of models/scanner settings specific to their various roles.


Scanners

Is Personal Health Information (PHI) covered by the scanners?

No, Personal Health Information (PHI) is not covered by the scanners.

Can scanners be configured to block specific terms/names?

Yes. In addition to scanning for offensive terms, toxic language, and more, the Blocked Term policy lets you specify words, terms, and names that are scanned for and blocked before they reach the LLM.
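
For illustration only, the sketch below shows the general idea behind a blocked-term check: the prompt is screened for configured terms before it is allowed to reach the LLM. The term list, function name, and code are hypothetical and not CalypsoAI's implementation; on the platform this is configured through the Blocked Term policy rather than written in code.

    import re

    # Hypothetical blocked terms; in CalypsoAI these come from the Blocked Term policy.
    BLOCKED_TERMS = ["project-atlas", "Jane Doe", "internal-codename"]

    def contains_blocked_term(prompt: str) -> bool:
        """Return True if the prompt contains any blocked term (case-insensitive, whole word)."""
        for term in BLOCKED_TERMS:
            pattern = r"\b" + re.escape(term) + r"\b"
            if re.search(pattern, prompt, flags=re.IGNORECASE):
                return True
        return False

    prompt = "Summarize the status of Project-Atlas for the board."
    if contains_blocked_term(prompt):
        print("Prompt blocked before reaching the LLM.")
    else:
        print("Prompt passed to the LLM.")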


Admin Tools

Do admins have the ability to monitor chat prompt inputs without blocking or redacting any prompt contents?

Yes, the audit setting allows prompts to be submitted without being blocked or redacted, so admins can monitor user behavior and inputs without interfering with the chat.

Is the admin dashboard modular or can it be customized?

No, the admin dashboard cannot be customized at this time.

Do admins have visibility into the users' prompt history?

Yes, the Prompt History logs give admins full visibility into all user prompts, including the scanner settings applied and the outcome (blocked vs. passed).


Integrations

Is integration with Microsoft Copilot supported?

CalypsoAI can integrate with any LLM that is accessible through an API, including OpenAI models running on Azure services. The Microsoft Copilot application does not currently provide API access, so we cannot integrate it with CalypsoAI at this time.
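
As a point of reference, "accessible through an API" means the model can be called programmatically, as in the minimal sketch below. The OpenAI Python SDK is used purely as an example provider; the model name and API key are placeholders, and this is not CalypsoAI code.

    # Illustration of direct API access to an LLM (placeholders throughout).
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # placeholder credential

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any API-accessible model works
        messages=[{"role": "user", "content": "Hello from an API-accessible model."}],
    )
    print(response.choices[0].message.content)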

Can CalypsoAI be used with custom models?

CalypsoAI can support any model, including both custom and internal models.

Which pre-configured LLM providers are supported?

While CalypsoAI can support any model, including both custom and internal models, there are several pre-configured providers that allow for seamless model integration.

Pre-Configured Providers:

  • OpenAI

  • Azure

  • AI21

  • Cohere

  • Anthropic

  • Groq

  • watsonx


Client Data

How is customer data handled in the SaaS environment?

At the infrastructure level, the Aurora RDS databases are encrypted at rest using a KMS key. For database caching we use ElastiCache with encryption in transit: clients can only connect over TLS, and all data in transit between client and cache is encrypted. Transit encryption mode is set to required, encrypting traffic for both clients and the cache cluster. For dashboarding, TimescaleDB is used, with Timescale data encrypted both in transit and at rest.

Both active databases and backups are encrypted.
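
The sketch below illustrates the kind of settings described above, using AWS's boto3 SDK. All identifiers, credentials, and sizing values are placeholders, and this is not CalypsoAI's actual deployment code; it only shows how encryption at rest (KMS) and encryption in transit are expressed in configuration.

    # Placeholder configuration sketch; not CalypsoAI's deployment code.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_cluster(
        DBClusterIdentifier="example-aurora-cluster",
        Engine="aurora-postgresql",
        MasterUsername="admin_user",
        MasterUserPassword="example-password",  # placeholder
        StorageEncrypted=True,                  # encryption at rest
        KmsKeyId="alias/example-kms-key",       # placeholder KMS key
    )

    elasticache = boto3.client("elasticache")
    elasticache.create_replication_group(
        ReplicationGroupId="example-cache",
        ReplicationGroupDescription="Cache with TLS required for all clients",
        Engine="redis",
        CacheNodeType="cache.t3.micro",
        NumCacheClusters=1,
        TransitEncryptionEnabled=True,   # clients must connect over TLS
        AtRestEncryptionEnabled=True,    # cached data encrypted at rest
    )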
