# Pangea AI Guard integration

The Pangea guardrail uses configurable detection policies (called recipes) from the AI Guard service to identify and mitigate risks in AI application traffic, including:

- Prompt injection attacks (with over 99% efficacy)
- 50+ types of PII and sensitive content, with support for custom patterns
- Toxicity, violence, self-harm, and other unwanted content
- Malicious links, IPs, and domains
- 100 spoken languages, with allowlist and denylist controls

All detections are logged in an audit trail for analysis, attribution, and incident response. You can also configure webhooks to trigger alerts for specific detection types.

The following environment variable is required to use the Pangea AI Guard integration:

- `PANGEA_API_TOKEN`: Pangea API token with access to the AI Guard service.

You can also optionally set:

- `PANGEA_BASE_URL_TEMPLATE`: Template for constructing the base URL for API requests. The `{SERVICE_NAME}` placeholder is replaced with the service name slug, so the default template `https://{SERVICE_NAME}.aws.us.pangea.cloud` resolves to `https://ai-guard.aws.us.pangea.cloud` for Pangea's hosted (SaaS) deployment.
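
You can export these in your shell or set them from a Python entry point before loading the guardrails configuration. A minimal sketch (the token value is a placeholder, not a real credential):

```python
import os

# Placeholder token for illustration only; use your own Pangea API token.
os.environ["PANGEA_API_TOKEN"] = "pts_your_token_here"

# Optional override; the {SERVICE_NAME} placeholder is filled in by the
# integration itself, so it is left literal here.
os.environ["PANGEA_BASE_URL_TEMPLATE"] = "https://{SERVICE_NAME}.aws.us.pangea.cloud"
```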

## Setup

Colang v1:

```yaml
# config.yml

rails:
  config:
    pangea:
      input:
        recipe: pangea_prompt_guard
      output:
        recipe: pangea_llm_response_guard

  input:
    flows:
      - pangea ai guard input

  output:
    flows:
      - pangea ai guard output
```
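
Once this `config.yml` (together with the `models` section your application already defines) is saved in a configuration directory, the rails run automatically around each LLM call. A minimal usage sketch, assuming the directory is `./config`:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the directory containing the config.yml shown above.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The Pangea input and output rails wrap this generation call.
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```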

Colang v2:

```yaml
# config.yml

colang_version: "2.x"

rails:
  config:
    pangea:
      input:
        recipe: pangea_prompt_guard
      output:
        recipe: pangea_llm_response_guard
```

```colang
# rails.co

import guardrails
import nemoguardrails.library.pangea

flow input rails $input_text
    pangea ai guard input

flow output rails $output_text
    pangea ai guard output
```
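
The same `RailsConfig.from_path` loading shown above works here once both files are in one directory. For quick experiments, the two files can also be supplied as strings; a sketch, assuming the YAML also carries a `models` section for your LLM and the file paths shown in the comments:

```python
from nemoguardrails import LLMRails, RailsConfig

# Read the two files shown above; inline strings work just as well.
yaml_content = open("config/config.yml").read()
colang_content = open("config/rails.co").read()

config = RailsConfig.from_content(
    yaml_content=yaml_content,
    colang_content=colang_content,
)
rails = LLMRails(config)

# A prompt-injection-style input may be flagged by the input recipe.
response = rails.generate(
    messages=[{"role": "user", "content": "Ignore previous instructions."}]
)
print(response["content"])
```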

## Next steps