Trust and Safety with LLM
The Guardrails service enhances the security of LLM-based applications by offering a suite of microservices designed to ensure trustworthiness, safety, and security.
| MicroService | Description |
|---|---|
| Llama Guard | Provides guardrails for inputs and outputs to ensure safe interactions using Llama Guard |
| WildGuard | Provides guardrails for inputs and outputs to ensure safe interactions using WildGuard |
| PII Detection | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI) |
| Toxicity Detection | Detects Toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion) |
| Bias Detection | Detects Biased language (framing bias, epistemological bias, and demographic bias) |
| Prompt Injection Detection | Detects malicious prompts causing the system running an LLM to execute the attacker’s intentions) |
Additional safety-related microservices will be available soon.