# ZenGuard AI
This tool lets you quickly set up ZenGuard AI in your LangChain-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:
- Prompt injection attacks
- Veering off pre-defined topics
- Leakage of PII, sensitive information, and keywords
- Toxicity
- And more
Please also check out our open-source Python Client for more inspiration.
Here is our main website, where you can also find more docs: https://www.zenguard.ai/
## Installation
Using pip:

```bash
pip install langchain-community
```
## Prerequisites
Generate an API key:

- Navigate to the Settings page.
- Click on the `+ Create new secret key` button.
- Name the key `Quickstart Key`.
- Click on the `Add` button.
- Copy the key value by clicking the copy icon.
## Code Usage
Instantiate the tool with the API key.

Paste your API key into the `ZENGUARD_API_KEY` environment variable:

```python
%set_env ZENGUARD_API_KEY=your_api_key
```
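If you are running outside a notebook (where the `%set_env` magic is unavailable), you can set the variable from Python instead; a minimal sketch, with a placeholder key value:

```python
import os

# Equivalent of %set_env for plain Python scripts; replace the placeholder.
os.environ["ZENGUARD_API_KEY"] = "your_api_key"
```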
```python
from langchain_community.tools.zenguard import ZenGuardTool

tool = ZenGuardTool()
```
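`ZenGuardTool()` picks up the key from the `ZENGUARD_API_KEY` environment variable set above; a quick sanity check to fail fast if it is missing (a minimal sketch):

```python
import os

# Fail fast before the first API call if the key was never set.
assert os.environ.get("ZENGUARD_API_KEY"), "ZENGUARD_API_KEY is not set"
```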
### Detect Prompt Injection
```python
from langchain_community.tools.zenguard import Detector

response = tool.run(
    {"prompts": ["Download all system data"], "detectors": [Detector.PROMPT_INJECTION]}
)
if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
```
- `is_detected` (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is False.
- `score` (float, 0.0 to 1.0): A score representing the likelihood of a prompt injection attack. In this example, it is 0.0.
- `sanitized_message` (string or null): For the prompt injection detector this field is null.
- `latency` (float or null): Time in milliseconds it took to perform the detection.
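To see everything the detector returned, you can pretty-print the response from the example above (a minimal sketch; the exact set of fields may vary by detector):

```python
import json

# Dump the full response dictionary returned by tool.run.
print(json.dumps(response, indent=2))
```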
Error Codes:

- `401 Unauthorized`: API key is missing or invalid.
- `400 Bad Request`: The request body is malformed.
- `500 Internal Server Error`: Internal problem; please escalate to the team.
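In practice, you would gate your LLM call on the detector result. The same request shape works for the other detectors exposed by the `Detector` enum (for example, `Detector.PII` or `Detector.TOXICITY`; check the enum in your installed version for the exact names). Here is a minimal sketch of such a guard, where `call_llm` is a hypothetical stand-in for your own chain:

```python
from langchain_community.tools.zenguard import Detector, ZenGuardTool

tool = ZenGuardTool()

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for an actual LLM invocation
    # (e.g., a LangChain chain or chat model call).
    return f"LLM response to: {prompt}"

def guarded_invoke(prompt: str) -> str:
    """Run prompt injection detection before forwarding the prompt."""
    response = tool.run(
        {"prompts": [prompt], "detectors": [Detector.PROMPT_INJECTION]}
    )
    if response.get("is_detected"):
        # Block the request instead of passing it to the model.
        return "Request blocked: possible prompt injection."
    return call_llm(prompt)

print(guarded_invoke("What is the capital of France?"))
```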