
Trust and Safety with LLM

The Guardrails service enhances the security of LLM-based applications by offering a suite of microservices designed to ensure trustworthiness, safety, and security.

| MicroService | Description |
| ------------ | ----------- |
| Llama Guard | Provides guardrails for inputs and outputs to ensure safe interactions using Llama Guard |
| WildGuard | Provides guardrails for inputs and outputs to ensure safe interactions using WildGuard |
| PII Detection | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI) |
| Toxicity Detection | Detects toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion) |
| Bias Detection | Detects biased language (framing bias, epistemological bias, and demographic bias) |
| Prompt Injection Detection | Detects malicious prompts that cause the system running an LLM to execute the attacker's intentions |
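
Each of these microservices is typically exposed as an HTTP endpoint that a client calls to screen text before it is sent to, or after it is returned from, an LLM. The sketch below shows one way a client might submit a prompt for a safety check; the host, port, endpoint path, and request schema are illustrative assumptions, not the documented API of any specific microservice.

```python
# Minimal client sketch for calling a guardrails microservice over HTTP.
# The URL and payload fields below are assumptions for illustration only.
import requests

GUARDRAILS_URL = "http://localhost:9090/v1/guardrails"  # assumed host/port/path


def check_prompt(text: str) -> str:
    """Send user text to the guardrails service and return its raw response."""
    response = requests.post(
        GUARDRAILS_URL,
        json={"text": text},  # assumed request schema
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # A benign prompt should pass through; a harmful one should be flagged or blocked.
    print(check_prompt("How do I bake sourdough bread?"))
```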

Additional safety-related microservices will be available soon.