Configuration File Reference#

Module for the configuration of rails.

pydantic model nemoguardrails.rails.llm.config.ActionRails#

Configuration of action rails.

Action rails control various options related to the execution of actions. Currently, only the instant_actions option is supported.

In the future, more options will be added, e.g., per-action input validation, output validation, throttling, disabling, etc.

Show JSON schema
{
   "title": "ActionRails",
   "description": "Configuration of action rails.\n\nAction rails control various options related to the execution of actions.\nCurrently, only\n\nIn the future multiple options will be added, e.g., what input validation should be\nperformed per action, output validation, throttling, disabling, etc.",
   "type": "object",
   "properties": {
      "instant_actions": {
         "anyOf": [
            {
               "items": {
                  "type": "string"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The names of all actions which should finish instantly.",
         "title": "Instant Actions"
      }
   }
}

Fields:
field instant_actions: List[str] | None = None#

The names of all actions which should finish instantly.
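For illustration, instant actions are declared under rails.actions in config.yml. This is a minimal sketch; the action name below is a placeholder for whatever actions your configuration defines:

```yaml
rails:
  actions:
    instant_actions:
      # Placeholder action name; list the actions from your own config
      # that should finish instantly.
      - MyInstantAction
```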

pydantic model nemoguardrails.rails.llm.config.AutoAlignOptions#

List of guardrails that are activated

Show JSON schema
{
   "title": "AutoAlignOptions",
   "description": "List of guardrails that are activated",
   "type": "object",
   "properties": {
      "guardrails_config": {
         "description": "The guardrails configuration that is passed to the AutoAlign endpoint",
         "title": "Guardrails Config",
         "type": "object"
      }
   }
}

Fields:
field guardrails_config: Dict[str, Any] [Optional]#

The guardrails configuration that is passed to the AutoAlign endpoint

pydantic model nemoguardrails.rails.llm.config.AutoAlignRailConfig#

Configuration data for the AutoAlign API

Show JSON schema
{
   "title": "AutoAlignRailConfig",
   "description": "Configuration data for the AutoAlign API",
   "type": "object",
   "properties": {
      "parameters": {
         "title": "Parameters",
         "type": "object"
      },
      "input": {
         "$ref": "#/$defs/AutoAlignOptions",
         "description": "Input configuration for AutoAlign guardrails"
      },
      "output": {
         "$ref": "#/$defs/AutoAlignOptions",
         "description": "Output configuration for AutoAlign guardrails"
      }
   },
   "$defs": {
      "AutoAlignOptions": {
         "description": "List of guardrails that are activated",
         "properties": {
            "guardrails_config": {
               "description": "The guardrails configuration that is passed to the AutoAlign endpoint",
               "title": "Guardrails Config",
               "type": "object"
            }
         },
         "title": "AutoAlignOptions",
         "type": "object"
      }
   }
}

Fields:
field input: AutoAlignOptions [Optional]#

Input configuration for AutoAlign guardrails

field output: AutoAlignOptions [Optional]#

Output configuration for AutoAlign guardrails

field parameters: Dict[str, Any] [Optional]#
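As a sketch, the AutoAlign rail configuration is typically placed under rails.config in config.yml. The endpoint value and the keys inside guardrails_config below are placeholders (guardrails_config is an arbitrary dictionary passed through to the AutoAlign endpoint):

```yaml
rails:
  config:
    autoalign:
      parameters:
        # Placeholder endpoint; replace with your AutoAlign deployment URL.
        endpoint: "https://<AUTOALIGN_ENDPOINT>/guardrail"
      input:
        guardrails_config:
          # Illustrative guardrail entry; consult the AutoAlign
          # documentation for the supported guardrail names and options.
          pii_detection:
            enabled: true
```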
pydantic model nemoguardrails.rails.llm.config.CoreConfig#

Settings for core internal mechanics.

Show JSON schema
{
   "title": "CoreConfig",
   "description": "Settings for core internal mechanics.",
   "type": "object",
   "properties": {
      "embedding_search_provider": {
         "$ref": "#/$defs/EmbeddingSearchProvider",
         "description": "The search provider used to search the most similar canonical forms/flows."
      }
   },
   "$defs": {
      "EmbeddingSearchProvider": {
         "description": "Configuration of a embedding search provider.",
         "properties": {
            "name": {
               "default": "default",
               "description": "The name of the embedding search provider. If not specified, default is used.",
               "title": "Name",
               "type": "string"
            },
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "cache": {
               "$ref": "#/$defs/EmbeddingsCacheConfig"
            }
         },
         "title": "EmbeddingSearchProvider",
         "type": "object"
      },
      "EmbeddingsCacheConfig": {
         "description": "Configuration for the caching embeddings.",
         "properties": {
            "enabled": {
               "default": false,
               "description": "Whether caching of the embeddings should be enabled or not.",
               "title": "Enabled",
               "type": "boolean"
            },
            "key_generator": {
               "default": "md5",
               "description": "The method to use for generating the cache keys.",
               "title": "Key Generator",
               "type": "string"
            },
            "store": {
               "default": "filesystem",
               "description": "What type of store to use for the cached embeddings.",
               "title": "Store",
               "type": "string"
            },
            "store_config": {
               "description": "Any additional configuration options required for the store. For example, path for `filesystem` or `host`/`port`/`db` for redis.",
               "title": "Store Config",
               "type": "object"
            }
         },
         "title": "EmbeddingsCacheConfig",
         "type": "object"
      }
   }
}

Fields:
field embedding_search_provider: EmbeddingSearchProvider [Optional]#

The search provider used to search the most similar canonical forms/flows.
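A sketch of how this looks in config.yml, assuming the default embedding search provider; the embedding_engine and embedding_model parameter names are the ones commonly used with the default provider, but treat them as assumptions for your provider:

```yaml
core:
  embedding_search_provider:
    name: default
    parameters:
      # Illustrative parameters for the default provider.
      embedding_engine: FastEmbed
      embedding_model: all-MiniLM-L6-v2
    cache:
      enabled: true
```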

pydantic model nemoguardrails.rails.llm.config.DialogRails#

Configuration of topical rails.

Show JSON schema
{
   "title": "DialogRails",
   "description": "Configuration of topical rails.",
   "type": "object",
   "properties": {
      "single_call": {
         "$ref": "#/$defs/SingleCallConfig",
         "description": "Configuration for the single LLM call option."
      },
      "user_messages": {
         "$ref": "#/$defs/UserMessagesConfig"
      }
   },
   "$defs": {
      "SingleCallConfig": {
         "description": "Configuration for the single LLM call option for topical rails.",
         "properties": {
            "enabled": {
               "default": false,
               "title": "Enabled",
               "type": "boolean"
            },
            "fallback_to_multiple_calls": {
               "default": true,
               "description": "Whether to fall back to multiple calls if a single call is not possible.",
               "title": "Fallback To Multiple Calls",
               "type": "boolean"
            }
         },
         "title": "SingleCallConfig",
         "type": "object"
      },
      "UserMessagesConfig": {
         "description": "Configuration for how the user messages are interpreted.",
         "properties": {
            "embeddings_only": {
               "default": false,
               "description": "Whether to use only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only",
               "type": "boolean"
            },
            "embeddings_only_similarity_threshold": {
               "anyOf": [
                  {
                     "maximum": 1.0,
                     "minimum": 0.0,
                     "type": "number"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The similarity threshold to use when using only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only Similarity Threshold"
            },
            "embeddings_only_fallback_intent": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Defines the fallback intent when the similarity is below the threshold. If set to None, the user intent is computed normally using the LLM. If set to a string value, that string is used as the intent.",
               "title": "Embeddings Only Fallback Intent"
            }
         },
         "title": "UserMessagesConfig",
         "type": "object"
      }
   }
}

Fields:
field single_call: SingleCallConfig [Optional]#

Configuration for the single LLM call option.

field user_messages: UserMessagesConfig [Optional]#

Configuration for how the user messages are interpreted.
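For illustration, a dialog rails section in config.yml might look like the following; the threshold value and fallback intent are example values, not defaults:

```yaml
rails:
  dialog:
    single_call:
      enabled: true
      # Fall back to the multi-call flow when a single call is not possible.
      fallback_to_multiple_calls: true
    user_messages:
      # Compute user canonical forms from embeddings alone.
      embeddings_only: true
      # Example threshold; tune for your message set.
      embeddings_only_similarity_threshold: 0.75
      # Example fallback canonical form used below the threshold.
      embeddings_only_fallback_intent: "express greeting"
```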

pydantic model nemoguardrails.rails.llm.config.Document#

Configuration for documents that should be used for question answering.

Show JSON schema
{
   "title": "Document",
   "description": "Configuration for documents that should be used for question answering.",
   "type": "object",
   "properties": {
      "format": {
         "title": "Format",
         "type": "string"
      },
      "content": {
         "title": "Content",
         "type": "string"
      }
   },
   "required": [
      "format",
      "content"
   ]
}

Fields:
field content: str [Required]#
field format: str [Required]#
pydantic model nemoguardrails.rails.llm.config.EmbeddingSearchProvider#

Configuration of an embedding search provider.

Show JSON schema
{
   "title": "EmbeddingSearchProvider",
   "description": "Configuration of a embedding search provider.",
   "type": "object",
   "properties": {
      "name": {
         "default": "default",
         "description": "The name of the embedding search provider. If not specified, default is used.",
         "title": "Name",
         "type": "string"
      },
      "parameters": {
         "title": "Parameters",
         "type": "object"
      },
      "cache": {
         "$ref": "#/$defs/EmbeddingsCacheConfig"
      }
   },
   "$defs": {
      "EmbeddingsCacheConfig": {
         "description": "Configuration for the caching embeddings.",
         "properties": {
            "enabled": {
               "default": false,
               "description": "Whether caching of the embeddings should be enabled or not.",
               "title": "Enabled",
               "type": "boolean"
            },
            "key_generator": {
               "default": "md5",
               "description": "The method to use for generating the cache keys.",
               "title": "Key Generator",
               "type": "string"
            },
            "store": {
               "default": "filesystem",
               "description": "What type of store to use for the cached embeddings.",
               "title": "Store",
               "type": "string"
            },
            "store_config": {
               "description": "Any additional configuration options required for the store. For example, path for `filesystem` or `host`/`port`/`db` for redis.",
               "title": "Store Config",
               "type": "object"
            }
         },
         "title": "EmbeddingsCacheConfig",
         "type": "object"
      }
   }
}

Fields:
field cache: EmbeddingsCacheConfig [Optional]#
field name: str = 'default'#

The name of the embedding search provider. If not specified, default is used.

field parameters: Dict[str, Any] [Optional]#
pydantic model nemoguardrails.rails.llm.config.EmbeddingsCacheConfig#

Configuration for caching embeddings.

Show JSON schema
{
   "title": "EmbeddingsCacheConfig",
   "description": "Configuration for the caching embeddings.",
   "type": "object",
   "properties": {
      "enabled": {
         "default": false,
         "description": "Whether caching of the embeddings should be enabled or not.",
         "title": "Enabled",
         "type": "boolean"
      },
      "key_generator": {
         "default": "md5",
         "description": "The method to use for generating the cache keys.",
         "title": "Key Generator",
         "type": "string"
      },
      "store": {
         "default": "filesystem",
         "description": "What type of store to use for the cached embeddings.",
         "title": "Store",
         "type": "string"
      },
      "store_config": {
         "description": "Any additional configuration options required for the store. For example, path for `filesystem` or `host`/`port`/`db` for redis.",
         "title": "Store Config",
         "type": "object"
      }
   }
}

Fields:
field enabled: bool = False#

Whether caching of the embeddings should be enabled or not.

field key_generator: str = 'md5'#

The method to use for generating the cache keys.

field store: str = 'filesystem'#

What type of store to use for the cached embeddings.

field store_config: Dict[str, Any] [Optional]#

Any additional configuration options required for the store. For example, path for filesystem or host/port/db for redis.

to_dict()#
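As a sketch, the cache settings nest under an embedding search provider in config.yml; the store_config keys depend on the chosen store, and the path below is illustrative:

```yaml
core:
  embedding_search_provider:
    cache:
      enabled: true
      key_generator: md5
      store: filesystem
      store_config:
        # Illustrative path for the filesystem store; a redis store
        # would instead take host/port/db options.
        path: .cache/embeddings
```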
pydantic model nemoguardrails.rails.llm.config.FactCheckingRailConfig#

Configuration data for the fact-checking rail.

Show JSON schema
{
   "title": "FactCheckingRailConfig",
   "description": "Configuration data for the fact-checking rail.",
   "type": "object",
   "properties": {
      "parameters": {
         "title": "Parameters",
         "type": "object"
      },
      "fallback_to_self_check": {
         "default": false,
         "description": "Whether to fall back to self-check if another method fail.",
         "title": "Fallback To Self Check",
         "type": "boolean"
      }
   }
}

Fields:
field fallback_to_self_check: bool = False#

Whether to fall back to self-check if another method fails.

field parameters: Dict[str, Any] [Optional]#
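For illustration, a fact-checking rail configuration under rails.config might look like the following; the endpoint shown is the conventional local AlignScore deployment and should be treated as an example, not a default:

```yaml
rails:
  config:
    fact_checking:
      parameters:
        # Illustrative AlignScore endpoint; replace with your deployment.
        endpoint: "http://localhost:5000/alignscore_large"
      # If the configured method fails, fall back to LLM self-check.
      fallback_to_self_check: true
```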
pydantic model nemoguardrails.rails.llm.config.InputRails#

Configuration of input rails.

Show JSON schema
{
   "title": "InputRails",
   "description": "Configuration of input rails.",
   "type": "object",
   "properties": {
      "flows": {
         "description": "The names of all the flows that implement input rails.",
         "items": {
            "type": "string"
         },
         "title": "Flows",
         "type": "array"
      }
   }
}

Fields:
field flows: List[str] [Optional]#

The names of all the flows that implement input rails.
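In config.yml, input rails are just a list of flow names; for example, using the built-in self-check flow:

```yaml
rails:
  input:
    flows:
      - self check input
```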

pydantic model nemoguardrails.rails.llm.config.Instruction#

Configuration for instructions in natural language that should be passed to the LLM.

Show JSON schema
{
   "title": "Instruction",
   "description": "Configuration for instructions in natural language that should be passed to the LLM.",
   "type": "object",
   "properties": {
      "type": {
         "title": "Type",
         "type": "string"
      },
      "content": {
         "title": "Content",
         "type": "string"
      }
   },
   "required": [
      "type",
      "content"
   ]
}

Fields:
field content: str [Required]#
field type: str [Required]#
pydantic model nemoguardrails.rails.llm.config.JailbreakDetectionConfig#

Configuration data for jailbreak detection.

Show JSON schema
{
   "title": "JailbreakDetectionConfig",
   "description": "Configuration data for jailbreak detection.",
   "type": "object",
   "properties": {
      "server_endpoint": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The endpoint for the jailbreak detection heuristics server.",
         "title": "Server Endpoint"
      },
      "length_per_perplexity_threshold": {
         "default": 89.79,
         "description": "The length/perplexity threshold.",
         "title": "Length Per Perplexity Threshold",
         "type": "number"
      },
      "prefix_suffix_perplexity_threshold": {
         "default": 1845.65,
         "description": "The prefix/suffix perplexity threshold.",
         "title": "Prefix Suffix Perplexity Threshold",
         "type": "number"
      },
      "embedding": {
         "default": "nvidia/nv-embedqa-e5-v5",
         "description": "Model to use for embedding-based detections.",
         "title": "Embedding",
         "type": "string"
      }
   }
}

Fields:
field embedding: str = 'nvidia/nv-embedqa-e5-v5'#

Model to use for embedding-based detections.

field length_per_perplexity_threshold: float = 89.79#

The length/perplexity threshold.

field prefix_suffix_perplexity_threshold: float = 1845.65#

The prefix/suffix perplexity threshold.

field server_endpoint: str | None = None#

The endpoint for the jailbreak detection heuristics server.
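A sketch of the jailbreak detection section under rails.config; the server endpoint is an example value (the thresholds shown are the documented defaults):

```yaml
rails:
  config:
    jailbreak_detection:
      # Illustrative endpoint for a locally running heuristics server.
      server_endpoint: "http://localhost:1337/heuristics"
      length_per_perplexity_threshold: 89.79
      prefix_suffix_perplexity_threshold: 1845.65
```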

pydantic model nemoguardrails.rails.llm.config.KnowledgeBaseConfig#

Show JSON schema
{
   "title": "KnowledgeBaseConfig",
   "type": "object",
   "properties": {
      "folder": {
         "default": "kb",
         "description": "The folder from which the documents should be loaded.",
         "title": "Folder",
         "type": "string"
      },
      "embedding_search_provider": {
         "$ref": "#/$defs/EmbeddingSearchProvider",
         "description": "The search provider used to search the knowledge base."
      }
   },
   "$defs": {
      "EmbeddingSearchProvider": {
         "description": "Configuration of a embedding search provider.",
         "properties": {
            "name": {
               "default": "default",
               "description": "The name of the embedding search provider. If not specified, default is used.",
               "title": "Name",
               "type": "string"
            },
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "cache": {
               "$ref": "#/$defs/EmbeddingsCacheConfig"
            }
         },
         "title": "EmbeddingSearchProvider",
         "type": "object"
      },
      "EmbeddingsCacheConfig": {
         "description": "Configuration for the caching embeddings.",
         "properties": {
            "enabled": {
               "default": false,
               "description": "Whether caching of the embeddings should be enabled or not.",
               "title": "Enabled",
               "type": "boolean"
            },
            "key_generator": {
               "default": "md5",
               "description": "The method to use for generating the cache keys.",
               "title": "Key Generator",
               "type": "string"
            },
            "store": {
               "default": "filesystem",
               "description": "What type of store to use for the cached embeddings.",
               "title": "Store",
               "type": "string"
            },
            "store_config": {
               "description": "Any additional configuration options required for the store. For example, path for `filesystem` or `host`/`port`/`db` for redis.",
               "title": "Store Config",
               "type": "object"
            }
         },
         "title": "EmbeddingsCacheConfig",
         "type": "object"
      }
   }
}

Fields:
field embedding_search_provider: EmbeddingSearchProvider [Optional]#

The search provider used to search the knowledge base.

field folder: str = 'kb'#

The folder from which the documents should be loaded.
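For illustration, a knowledge base section in config.yml using the defaults:

```yaml
knowledge_base:
  # Documents are loaded from this folder, relative to the config.
  folder: kb
  embedding_search_provider:
    name: default
```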

pydantic model nemoguardrails.rails.llm.config.LogAdapterConfig#

Show JSON schema
{
   "title": "LogAdapterConfig",
   "type": "object",
   "properties": {
      "name": {
         "default": "FileSystem",
         "description": "The name of the adapter.",
         "title": "Name",
         "type": "string"
      }
   },
   "additionalProperties": true
}

Config:
  • extra: str = allow

Fields:
field name: str = 'FileSystem'#

The name of the adapter.
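Because this model allows extra fields (extra: allow), adapter-specific options can sit alongside name. A sketch of a tracing adapter entry, where the filepath key is an assumed FileSystem-adapter option:

```yaml
tracing:
  enabled: true
  adapters:
    - name: FileSystem
      # Illustrative adapter-specific option, permitted by extra: allow.
      filepath: "./.traces/trace.jsonl"
```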

pydantic model nemoguardrails.rails.llm.config.MessageTemplate#

Template for a message structure.

Show JSON schema
{
   "title": "MessageTemplate",
   "description": "Template for a message structure.",
   "type": "object",
   "properties": {
      "type": {
         "description": "The type of message, e.g., 'assistant', 'user', 'system'.",
         "title": "Type",
         "type": "string"
      },
      "content": {
         "description": "The content of the message.",
         "title": "Content",
         "type": "string"
      }
   },
   "required": [
      "type",
      "content"
   ]
}

Fields:
field content: str [Required]#

The content of the message.

field type: str [Required]#

The type of message, e.g., ‘assistant’, ‘user’, ‘system’.
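Message templates appear, for example, in prompt definitions for chat models. A sketch, where the task name and template variable are illustrative:

```yaml
prompts:
  # Illustrative task name and message contents.
  - task: self_check_input
    messages:
      - type: system
        content: "Check whether the user message complies with the policy."
      - type: user
        content: "User message: {{ user_input }}"
```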

pydantic model nemoguardrails.rails.llm.config.Model#

Configuration of a model used by the rails engine.

Typically, the main model is configured as follows:

{
    "type": "main",
    "engine": "openai",
    "model": "gpt-3.5-turbo-instruct"
}

Show JSON schema
{
   "title": "Model",
   "description": "Configuration of a model used by the rails engine.\n\nTypically, the main model is configured e.g.:\n{\n    \"type\": \"main\",\n    \"engine\": \"openai\",\n    \"model\": \"gpt-3.5-turbo-instruct\"\n}",
   "type": "object",
   "properties": {
      "type": {
         "title": "Type",
         "type": "string"
      },
      "engine": {
         "title": "Engine",
         "type": "string"
      },
      "model": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The name of the model. If not specified, it should be specified through the parameters attribute.",
         "title": "Model"
      },
      "parameters": {
         "title": "Parameters",
         "type": "object"
      }
   },
   "required": [
      "type",
      "engine"
   ]
}

Fields:
field engine: str [Required]#
field model: str | None = None#

The name of the model. If not specified, it should be specified through the parameters attribute.

field parameters: Dict[str, Any] [Optional]#
field type: str [Required]#
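In config.yml, models are listed under the models key. A sketch with a main model and an optional embeddings model; the parameters entry shows where engine-specific options go (the temperature value is illustrative):

```yaml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
    parameters:
      # Illustrative engine-specific parameter.
      temperature: 0.2
  - type: embeddings
    engine: openai
    model: text-embedding-ada-002
```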
pydantic model nemoguardrails.rails.llm.config.OutputRails#

Configuration of output rails.

Show JSON schema
{
   "title": "OutputRails",
   "description": "Configuration of output rails.",
   "type": "object",
   "properties": {
      "flows": {
         "description": "The names of all the flows that implement output rails.",
         "items": {
            "type": "string"
         },
         "title": "Flows",
         "type": "array"
      }
   }
}

Fields:
field flows: List[str] [Optional]#

The names of all the flows that implement output rails.
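As with input rails, output rails are a list of flow names in config.yml; for example, using the built-in self-check flow:

```yaml
rails:
  output:
    flows:
      - self check output
```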

pydantic model nemoguardrails.rails.llm.config.PatronusEvaluateApiParams#

Config to parameterize the Patronus Evaluate API call

Show JSON schema
{
   "title": "PatronusEvaluateApiParams",
   "description": "Config to parameterize the Patronus Evaluate API call",
   "type": "object",
   "properties": {
      "success_strategy": {
         "anyOf": [
            {
               "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
            },
            {
               "type": "null"
            }
         ],
         "default": "all_pass",
         "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
      },
      "params": {
         "description": "Parameters to the Patronus Evaluate API",
         "title": "Params",
         "type": "object"
      }
   },
   "$defs": {
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      }
   }
}

Fields:
field params: Dict[str, Any] [Optional]#

Parameters to the Patronus Evaluate API

field success_strategy: PatronusEvaluationSuccessStrategy | None = PatronusEvaluationSuccessStrategy.ALL_PASS#

Strategy to determine whether the Patronus Evaluate API Guardrail passes or not.

pydantic model nemoguardrails.rails.llm.config.PatronusEvaluateConfig#

Config for the Patronus Evaluate API call

Show JSON schema
{
   "title": "PatronusEvaluateConfig",
   "description": "Config for the Patronus Evaluate API call",
   "type": "object",
   "properties": {
      "evaluate_config": {
         "$ref": "#/$defs/PatronusEvaluateApiParams",
         "description": "Configuration passed to the Patronus Evaluate API"
      }
   },
   "$defs": {
      "PatronusEvaluateApiParams": {
         "description": "Config to parameterize the Patronus Evaluate API call",
         "properties": {
            "success_strategy": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "all_pass",
               "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
            },
            "params": {
               "description": "Parameters to the Patronus Evaluate API",
               "title": "Params",
               "type": "object"
            }
         },
         "title": "PatronusEvaluateApiParams",
         "type": "object"
      },
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      }
   }
}

Fields:
field evaluate_config: PatronusEvaluateApiParams [Optional]#

Configuration passed to the Patronus Evaluate API

class nemoguardrails.rails.llm.config.PatronusEvaluationSuccessStrategy(value)#

Strategy for determining whether a Patronus Evaluation API request should pass, especially when multiple evaluators are called in a single request. ALL_PASS requires all evaluators to pass for success. ANY_PASS requires only one evaluator to pass for success.

pydantic model nemoguardrails.rails.llm.config.PatronusRailConfig#

Configuration data for the Patronus Evaluate API

Show JSON schema
{
   "title": "PatronusRailConfig",
   "description": "Configuration data for the Patronus Evaluate API",
   "type": "object",
   "properties": {
      "input": {
         "anyOf": [
            {
               "$ref": "#/$defs/PatronusEvaluateConfig"
            },
            {
               "type": "null"
            }
         ],
         "description": "Patronus Evaluate API configuration for an Input Guardrail"
      },
      "output": {
         "anyOf": [
            {
               "$ref": "#/$defs/PatronusEvaluateConfig"
            },
            {
               "type": "null"
            }
         ],
         "description": "Patronus Evaluate API configuration for an Output Guardrail"
      }
   },
   "$defs": {
      "PatronusEvaluateApiParams": {
         "description": "Config to parameterize the Patronus Evaluate API call",
         "properties": {
            "success_strategy": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "all_pass",
               "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
            },
            "params": {
               "description": "Parameters to the Patronus Evaluate API",
               "title": "Params",
               "type": "object"
            }
         },
         "title": "PatronusEvaluateApiParams",
         "type": "object"
      },
      "PatronusEvaluateConfig": {
         "description": "Config for the Patronus Evaluate API call",
         "properties": {
            "evaluate_config": {
               "$ref": "#/$defs/PatronusEvaluateApiParams",
               "description": "Configuration passed to the Patronus Evaluate API"
            }
         },
         "title": "PatronusEvaluateConfig",
         "type": "object"
      },
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      }
   }
}

Fields:
field input: PatronusEvaluateConfig | None [Optional]#

Patronus Evaluate API configuration for an Input Guardrail

field output: PatronusEvaluateConfig | None [Optional]#

Patronus Evaluate API configuration for an Output Guardrail
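A sketch of a Patronus output guardrail under rails.config; the params payload is passed through to the Evaluate API, so the evaluator entry below is a placeholder, not a documented default:

```yaml
rails:
  config:
    patronus:
      output:
        evaluate_config:
          # Require every evaluator to pass ("all_pass" or "any_pass").
          success_strategy: "all_pass"
          params:
            # Illustrative payload forwarded to the Patronus Evaluate API.
            evaluators:
              - evaluator: "lynx"
```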

pydantic model nemoguardrails.rails.llm.config.PrivateAIDetection#

Configuration for Private AI.

Show JSON schema
{
   "title": "PrivateAIDetection",
   "description": "Configuration for Private AI.",
   "type": "object",
   "properties": {
      "server_endpoint": {
         "default": "http://localhost:8080/process/text",
         "description": "The endpoint for the private AI detection server.",
         "title": "Server Endpoint",
         "type": "string"
      },
      "input": {
         "$ref": "#/$defs/PrivateAIDetectionOptions",
         "description": "Configuration of the entities to be detected on the user input."
      },
      "output": {
         "$ref": "#/$defs/PrivateAIDetectionOptions",
         "description": "Configuration of the entities to be detected on the bot output."
      },
      "retrieval": {
         "$ref": "#/$defs/PrivateAIDetectionOptions",
         "description": "Configuration of the entities to be detected on retrieved relevant chunks."
      }
   },
   "$defs": {
      "PrivateAIDetectionOptions": {
         "description": "Configuration options for Private AI.",
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            }
         },
         "title": "PrivateAIDetectionOptions",
         "type": "object"
      }
   }
}

Fields:
field input: PrivateAIDetectionOptions [Optional]#

Configuration of the entities to be detected on the user input.

field output: PrivateAIDetectionOptions [Optional]#

Configuration of the entities to be detected on the bot output.

field retrieval: PrivateAIDetectionOptions [Optional]#

Configuration of the entities to be detected on retrieved relevant chunks.

field server_endpoint: str = 'http://localhost:8080/process/text'#

The endpoint for the private AI detection server.
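For illustration, a Private AI section under rails.config; the entity names below are placeholders for the entity types supported by your Private AI deployment:

```yaml
rails:
  config:
    privateai:
      server_endpoint: "http://localhost:8080/process/text"
      input:
        entities:
          # Illustrative entity types to detect in user input.
          - EMAIL_ADDRESS
          - LOCATION
```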

pydantic model nemoguardrails.rails.llm.config.PrivateAIDetectionOptions#

Configuration options for Private AI.

Show JSON schema
{
   "title": "PrivateAIDetectionOptions",
   "description": "Configuration options for Private AI.",
   "type": "object",
   "properties": {
      "entities": {
         "description": "The list of entities that should be detected.",
         "items": {
            "type": "string"
         },
         "title": "Entities",
         "type": "array"
      }
   }
}

Fields:
field entities: List[str] [Optional]#

The list of entities that should be detected.

pydantic model nemoguardrails.rails.llm.config.Rails#

Configuration of specific rails.

Show JSON schema
{
   "title": "Rails",
   "description": "Configuration of specific rails.",
   "type": "object",
   "properties": {
      "config": {
         "$ref": "#/$defs/RailsConfigData"
      },
      "input": {
         "$ref": "#/$defs/InputRails",
         "description": "Configuration of the input rails."
      },
      "output": {
         "$ref": "#/$defs/OutputRails",
         "description": "Configuration of the output rails."
      },
      "retrieval": {
         "$ref": "#/$defs/RetrievalRails",
         "description": "Configuration of the retrieval rails."
      },
      "dialog": {
         "$ref": "#/$defs/DialogRails",
         "description": "Configuration of the dialog rails."
      },
      "actions": {
         "$ref": "#/$defs/ActionRails",
         "description": "Configuration of action rails."
      }
   },
   "$defs": {
      "ActionRails": {
         "description": "Configuration of action rails.\n\nAction rails control various options related to the execution of actions.\nCurrently, only the `instant_actions` option is supported.\n\nIn the future multiple options will be added, e.g., what input validation should be\nperformed per action, output validation, throttling, disabling, etc.",
         "properties": {
            "instant_actions": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The names of all actions which should finish instantly.",
               "title": "Instant Actions"
            }
         },
         "title": "ActionRails",
         "type": "object"
      },
      "AutoAlignOptions": {
         "description": "List of guardrails that are activated",
         "properties": {
            "guardrails_config": {
               "description": "The guardrails configuration that is passed to the AutoAlign endpoint",
               "title": "Guardrails Config",
               "type": "object"
            }
         },
         "title": "AutoAlignOptions",
         "type": "object"
      },
      "AutoAlignRailConfig": {
         "description": "Configuration data for the AutoAlign API",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "input": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Input configuration for AutoAlign guardrails"
            },
            "output": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Output configuration for AutoAlign guardrails"
            }
         },
         "title": "AutoAlignRailConfig",
         "type": "object"
      },
      "DialogRails": {
         "description": "Configuration of topical rails.",
         "properties": {
            "single_call": {
               "$ref": "#/$defs/SingleCallConfig",
               "description": "Configuration for the single LLM call option."
            },
            "user_messages": {
               "$ref": "#/$defs/UserMessagesConfig"
            }
         },
         "title": "DialogRails",
         "type": "object"
      },
      "FactCheckingRailConfig": {
         "description": "Configuration data for the fact-checking rail.",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "fallback_to_self_check": {
               "default": false,
                "description": "Whether to fall back to self-check if another method fails.",
               "title": "Fallback To Self Check",
               "type": "boolean"
            }
         },
         "title": "FactCheckingRailConfig",
         "type": "object"
      },
      "InputRails": {
         "description": "Configuration of input rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement input rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "InputRails",
         "type": "object"
      },
      "JailbreakDetectionConfig": {
         "description": "Configuration data for jailbreak detection.",
         "properties": {
            "server_endpoint": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The endpoint for the jailbreak detection heuristics server.",
               "title": "Server Endpoint"
            },
            "length_per_perplexity_threshold": {
               "default": 89.79,
               "description": "The length/perplexity threshold.",
               "title": "Length Per Perplexity Threshold",
               "type": "number"
            },
            "prefix_suffix_perplexity_threshold": {
               "default": 1845.65,
               "description": "The prefix/suffix perplexity threshold.",
               "title": "Prefix Suffix Perplexity Threshold",
               "type": "number"
            },
            "embedding": {
               "default": "nvidia/nv-embedqa-e5-v5",
               "description": "Model to use for embedding-based detections.",
               "title": "Embedding",
               "type": "string"
            }
         },
         "title": "JailbreakDetectionConfig",
         "type": "object"
      },
      "OutputRails": {
         "description": "Configuration of output rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement output rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "OutputRails",
         "type": "object"
      },
      "PatronusEvaluateApiParams": {
         "description": "Config to parameterize the Patronus Evaluate API call",
         "properties": {
            "success_strategy": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "all_pass",
               "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
            },
            "params": {
               "description": "Parameters to the Patronus Evaluate API",
               "title": "Params",
               "type": "object"
            }
         },
         "title": "PatronusEvaluateApiParams",
         "type": "object"
      },
      "PatronusEvaluateConfig": {
         "description": "Config for the Patronus Evaluate API call",
         "properties": {
            "evaluate_config": {
               "$ref": "#/$defs/PatronusEvaluateApiParams",
               "description": "Configuration passed to the Patronus Evaluate API"
            }
         },
         "title": "PatronusEvaluateConfig",
         "type": "object"
      },
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      },
      "PatronusRailConfig": {
         "description": "Configuration data for the Patronus Evaluate API",
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Input Guardrail"
            },
            "output": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Output Guardrail"
            }
         },
         "title": "PatronusRailConfig",
         "type": "object"
      },
      "PrivateAIDetection": {
         "description": "Configuration for Private AI.",
         "properties": {
            "server_endpoint": {
               "default": "http://localhost:8080/process/text",
               "description": "The endpoint for the private AI detection server.",
               "title": "Server Endpoint",
               "type": "string"
            },
            "input": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "PrivateAIDetection",
         "type": "object"
      },
      "PrivateAIDetectionOptions": {
         "description": "Configuration options for Private AI.",
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            }
         },
         "title": "PrivateAIDetectionOptions",
         "type": "object"
      },
      "RailsConfigData": {
         "description": "Configuration data for specific rails that are supported out-of-the-box.",
         "properties": {
            "fact_checking": {
               "$ref": "#/$defs/FactCheckingRailConfig"
            },
            "autoalign": {
               "$ref": "#/$defs/AutoAlignRailConfig",
               "description": "Configuration data for the AutoAlign guardrails API."
            },
            "patronus": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusRailConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration data for the Patronus Evaluate API."
            },
            "sensitive_data_detection": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/SensitiveDataDetection"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for detecting sensitive data."
            },
            "jailbreak_detection": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/JailbreakDetectionConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for jailbreak detection."
            },
            "privateai": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PrivateAIDetection"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for Private AI."
            }
         },
         "title": "RailsConfigData",
         "type": "object"
      },
      "RetrievalRails": {
         "description": "Configuration of retrieval rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement retrieval rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "RetrievalRails",
         "type": "object"
      },
      "SensitiveDataDetection": {
         "description": "Configuration of what sensitive data should be detected.",
         "properties": {
            "recognizers": {
               "description": "Additional custom recognizers. Check out https://microsoft.github.io/presidio/tutorial/08_no_code/ for more details.",
               "items": {
                  "type": "object"
               },
               "title": "Recognizers",
               "type": "array"
            },
            "input": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "SensitiveDataDetection",
         "type": "object"
      },
      "SensitiveDataDetectionOptions": {
         "properties": {
            "entities": {
                "description": "The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ for the list of supported entities.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            },
            "mask_token": {
               "default": "*",
               "description": "The token that should be used to mask the sensitive data.",
               "title": "Mask Token",
               "type": "string"
            },
            "score_threshold": {
               "default": 0.2,
               "description": "The score threshold that should be used to detect the sensitive data.",
               "title": "Score Threshold",
               "type": "number"
            }
         },
         "title": "SensitiveDataDetectionOptions",
         "type": "object"
      },
      "SingleCallConfig": {
         "description": "Configuration for the single LLM call option for topical rails.",
         "properties": {
            "enabled": {
               "default": false,
               "title": "Enabled",
               "type": "boolean"
            },
            "fallback_to_multiple_calls": {
               "default": true,
               "description": "Whether to fall back to multiple calls if a single call is not possible.",
               "title": "Fallback To Multiple Calls",
               "type": "boolean"
            }
         },
         "title": "SingleCallConfig",
         "type": "object"
      },
      "UserMessagesConfig": {
         "description": "Configuration for how the user messages are interpreted.",
         "properties": {
            "embeddings_only": {
               "default": false,
               "description": "Whether to use only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only",
               "type": "boolean"
            },
            "embeddings_only_similarity_threshold": {
               "anyOf": [
                  {
                     "maximum": 1.0,
                     "minimum": 0.0,
                     "type": "number"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The similarity threshold to use when using only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only Similarity Threshold"
            },
            "embeddings_only_fallback_intent": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Defines the fallback intent when the similarity is below the threshold. If set to None, the user intent is computed normally using the LLM. If set to a string value, that string is used as the intent.",
               "title": "Embeddings Only Fallback Intent"
            }
         },
         "title": "UserMessagesConfig",
         "type": "object"
      }
   }
}

Fields:
field actions: ActionRails [Optional]#

Configuration of action rails.

field config: RailsConfigData [Optional]#

Configuration data for specific rails that are supported out-of-the-box.

field dialog: DialogRails [Optional]#

Configuration of the dialog rails.

field input: InputRails [Optional]#

Configuration of the input rails.

field output: OutputRails [Optional]#

Configuration of the output rails.

field retrieval: RetrievalRails [Optional]#

Configuration of the retrieval rails.
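In a `config.yml`, these fields live under the top-level `rails` key. A minimal sketch (the flow and action names are placeholders for flows and actions defined elsewhere in the configuration):

```yaml
rails:
  input:
    flows:
      - self check input   # placeholder flow name
  output:
    flows:
      - self check output  # placeholder flow name
  dialog:
    single_call:
      enabled: false
  actions:
    instant_actions:
      - my_custom_action   # placeholder action name
```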

pydantic model nemoguardrails.rails.llm.config.RailsConfig#

Configuration object for the models and the rails.

TODO: add typed config for user_messages, bot_messages, and flows.
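As a rough sketch of how the top-level fields combine in a `config.yml` (the engine/model pair and the flow name are illustrative, not defaults):

```yaml
models:
  - type: main
    engine: openai        # illustrative engine/model pair
    model: gpt-4o
instructions:
  - type: general
    content: |
      Below is a conversation between a helpful AI assistant and a user.
streaming: false
lowest_temperature: 0.001
rails:
  input:
    flows:
      - self check input  # illustrative flow name
```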

Show JSON schema
{
   "title": "RailsConfig",
   "description": "Configuration object for the models and the rails.\n\nTODO: add typed config for user_messages, bot_messages, and flows.",
   "type": "object",
   "properties": {
      "models": {
         "description": "The list of models used by the rails configuration.",
         "items": {
            "$ref": "#/$defs/Model"
         },
         "title": "Models",
         "type": "array"
      },
      "user_messages": {
         "additionalProperties": {
            "items": {
               "type": "string"
            },
            "type": "array"
         },
         "description": "The list of user messages that should be used for the rails.",
         "title": "User Messages",
         "type": "object"
      },
      "bot_messages": {
         "additionalProperties": {
            "items": {
               "type": "string"
            },
            "type": "array"
         },
         "description": "The list of bot messages that should be used for the rails.",
         "title": "Bot Messages",
         "type": "object"
      },
      "flows": {
         "description": "The list of flows that should be used for the rails.",
         "items": {
            "anyOf": [
               {
                  "type": "object"
               },
               {}
            ]
         },
         "title": "Flows",
         "type": "array"
      },
      "instructions": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/Instruction"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": [
            {
               "type": "general",
               "content": "Below is a conversation between a helpful AI assistant and a user. The bot is designed to generate human-like text based on the input that it receives. The bot is talkative and provides lots of specific details. If the bot does not know the answer to a question, it truthfully says it does not know."
            }
         ],
         "description": "List of instructions in natural language that the LLM should use.",
         "title": "Instructions"
      },
      "docs": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/Document"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "List of documents that should be used for question answering.",
         "title": "Docs"
      },
      "actions_server_url": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The URL of the actions server that should be used for the rails.",
         "title": "Actions Server Url"
      },
      "sample_conversation": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "user \"Hello there!\"\n  express greeting\nbot express greeting\n  \"Hello! How can I assist you today?\"\nuser \"What can you do for me?\"\n  ask about capabilities\nbot respond about capabilities\n  \"As an AI assistant, I can help you with a wide range of tasks. This includes question answering on various topics, generating text for various purposes and providing suggestions based on your preferences.\"\nuser \"Tell me a bit about the history of NVIDIA.\"\n  ask general question\nbot response for general question\n  \"NVIDIA is a technology company that specializes in designing and manufacturing graphics processing units (GPUs) and other computer hardware. The company was founded in 1993 by Jen-Hsun Huang, Chris Malachowsky, and Curtis Priem.\"\nuser \"tell me more\"\n  request more information\nbot provide more information\n  \"Initially, the company focused on developing 3D graphics processing technology for the PC gaming market. In 1999, NVIDIA released the GeForce 256, the world's first GPU, which was a major breakthrough for the gaming industry. The company continued to innovate in the GPU space, releasing new products and expanding into other markets such as professional graphics, mobile devices, and artificial intelligence.\"\nuser \"thanks\"\n  express appreciation\nbot express appreciation and offer additional help\n  \"You're welcome. If you have any more questions or if there's anything else I can help you with, please don't hesitate to ask.\"\n",
         "description": "The sample conversation that should be used inside the prompts.",
         "title": "Sample Conversation"
      },
      "prompts": {
         "anyOf": [
            {
               "items": {
                  "$ref": "#/$defs/TaskPrompt"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The prompts that should be used for the various LLM tasks.",
         "title": "Prompts"
      },
      "prompting_mode": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "standard",
         "description": "Allows choosing between different prompting strategies.",
         "title": "Prompting Mode"
      },
      "config_path": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The path from which the configuration was loaded.",
         "title": "Config Path"
      },
      "import_paths": {
         "anyOf": [
            {
               "items": {
                  "type": "string"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "description": "A list of additional paths from which configuration elements (colang flows, .yml files, actions) should be loaded.",
         "title": "Import Paths"
      },
      "imported_paths": {
         "anyOf": [
            {
               "additionalProperties": {
                  "type": "string"
               },
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "description": "The mapping between the imported paths and the actual full path to which they were resolved.",
         "title": "Imported Paths"
      },
      "lowest_temperature": {
         "anyOf": [
            {
               "type": "number"
            },
            {
               "type": "null"
            }
         ],
         "default": 0.001,
         "description": "The lowest temperature that should be used for the LLM.",
         "title": "Lowest Temperature"
      },
      "enable_multi_step_generation": {
         "anyOf": [
            {
               "type": "boolean"
            },
            {
               "type": "null"
            }
         ],
         "default": false,
         "description": "Whether to enable multi-step generation for the LLM.",
         "title": "Enable Multi Step Generation"
      },
      "colang_version": {
         "default": "1.0",
         "description": "The Colang version to use.",
         "title": "Colang Version",
         "type": "string"
      },
      "custom_data": {
         "description": "Any custom configuration data that might be needed.",
         "title": "Custom Data",
         "type": "object"
      },
      "knowledge_base": {
         "$ref": "#/$defs/KnowledgeBaseConfig",
         "description": "Configuration for the built-in knowledge base support."
      },
      "core": {
         "$ref": "#/$defs/CoreConfig",
         "description": "Configuration for core internal mechanics."
      },
      "rails": {
         "$ref": "#/$defs/Rails",
         "description": "Configuration for the various rails (input, output, etc.)."
      },
      "streaming": {
         "default": false,
         "description": "Whether this configuration should use streaming mode or not.",
         "title": "Streaming",
         "type": "boolean"
      },
      "enable_rails_exceptions": {
         "default": false,
         "description": "If set, the pre-defined guardrails raise exceptions instead of returning pre-defined messages.",
         "title": "Enable Rails Exceptions",
         "type": "boolean"
      },
      "passthrough": {
         "anyOf": [
            {
               "type": "boolean"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Whether the original prompt should pass through the guardrails configuration as is. This means it will not be altered in any way.",
         "title": "Passthrough"
      },
      "event_source_uid": {
         "default": "NeMoGuardrails-Colang-2.x",
         "description": "The source ID of events sent by the Colang Runtime. Useful to identify the component that has sent an event.",
         "title": "Event Source Uid",
         "type": "string"
      },
      "tracing": {
         "$ref": "#/$defs/TracingConfig",
         "description": "Configuration for tracing."
      },
      "raw_llm_call_action": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "raw llm call",
         "description": "The name of the action that would execute the original raw LLM call. ",
         "title": "Raw Llm Call Action"
      }
   },
   "$defs": {
      "ActionRails": {
         "description": "Configuration of action rails.\n\nAction rails control various options related to the execution of actions.\nCurrently, only the `instant_actions` option is supported.\n\nIn the future multiple options will be added, e.g., what input validation should be\nperformed per action, output validation, throttling, disabling, etc.",
         "properties": {
            "instant_actions": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The names of all actions which should finish instantly.",
               "title": "Instant Actions"
            }
         },
         "title": "ActionRails",
         "type": "object"
      },
      "AutoAlignOptions": {
         "description": "List of guardrails that are activated",
         "properties": {
            "guardrails_config": {
               "description": "The guardrails configuration that is passed to the AutoAlign endpoint",
               "title": "Guardrails Config",
               "type": "object"
            }
         },
         "title": "AutoAlignOptions",
         "type": "object"
      },
      "AutoAlignRailConfig": {
         "description": "Configuration data for the AutoAlign API",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "input": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Input configuration for AutoAlign guardrails"
            },
            "output": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Output configuration for AutoAlign guardrails"
            }
         },
         "title": "AutoAlignRailConfig",
         "type": "object"
      },
      "CoreConfig": {
         "description": "Settings for core internal mechanics.",
         "properties": {
            "embedding_search_provider": {
               "$ref": "#/$defs/EmbeddingSearchProvider",
               "description": "The search provider used to search the most similar canonical forms/flows."
            }
         },
         "title": "CoreConfig",
         "type": "object"
      },
      "DialogRails": {
         "description": "Configuration of topical rails.",
         "properties": {
            "single_call": {
               "$ref": "#/$defs/SingleCallConfig",
               "description": "Configuration for the single LLM call option."
            },
            "user_messages": {
               "$ref": "#/$defs/UserMessagesConfig"
            }
         },
         "title": "DialogRails",
         "type": "object"
      },
      "Document": {
         "description": "Configuration for documents that should be used for question answering.",
         "properties": {
            "format": {
               "title": "Format",
               "type": "string"
            },
            "content": {
               "title": "Content",
               "type": "string"
            }
         },
         "required": [
            "format",
            "content"
         ],
         "title": "Document",
         "type": "object"
      },
      "EmbeddingSearchProvider": {
         "description": "Configuration of an embedding search provider.",
         "properties": {
            "name": {
               "default": "default",
                "description": "The name of the embedding search provider. If not specified, the default one is used.",
               "title": "Name",
               "type": "string"
            },
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "cache": {
               "$ref": "#/$defs/EmbeddingsCacheConfig"
            }
         },
         "title": "EmbeddingSearchProvider",
         "type": "object"
      },
      "EmbeddingsCacheConfig": {
         "description": "Configuration for caching embeddings.",
         "properties": {
            "enabled": {
               "default": false,
               "description": "Whether caching of the embeddings should be enabled or not.",
               "title": "Enabled",
               "type": "boolean"
            },
            "key_generator": {
               "default": "md5",
               "description": "The method to use for generating the cache keys.",
               "title": "Key Generator",
               "type": "string"
            },
            "store": {
               "default": "filesystem",
               "description": "What type of store to use for the cached embeddings.",
               "title": "Store",
               "type": "string"
            },
            "store_config": {
               "description": "Any additional configuration options required for the store. For example, path for `filesystem` or `host`/`port`/`db` for redis.",
               "title": "Store Config",
               "type": "object"
            }
         },
         "title": "EmbeddingsCacheConfig",
         "type": "object"
      },
      "FactCheckingRailConfig": {
         "description": "Configuration data for the fact-checking rail.",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "fallback_to_self_check": {
               "default": false,
               "description": "Whether to fall back to self-check if another method fails.",
               "title": "Fallback To Self Check",
               "type": "boolean"
            }
         },
         "title": "FactCheckingRailConfig",
         "type": "object"
      },
      "InputRails": {
         "description": "Configuration of input rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement input rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "InputRails",
         "type": "object"
      },
      "Instruction": {
         "description": "Configuration for instructions in natural language that should be passed to the LLM.",
         "properties": {
            "type": {
               "title": "Type",
               "type": "string"
            },
            "content": {
               "title": "Content",
               "type": "string"
            }
         },
         "required": [
            "type",
            "content"
         ],
         "title": "Instruction",
         "type": "object"
      },
      "JailbreakDetectionConfig": {
         "description": "Configuration data for jailbreak detection.",
         "properties": {
            "server_endpoint": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The endpoint for the jailbreak detection heuristics server.",
               "title": "Server Endpoint"
            },
            "length_per_perplexity_threshold": {
               "default": 89.79,
               "description": "The length/perplexity threshold.",
               "title": "Length Per Perplexity Threshold",
               "type": "number"
            },
            "prefix_suffix_perplexity_threshold": {
               "default": 1845.65,
               "description": "The prefix/suffix perplexity threshold.",
               "title": "Prefix Suffix Perplexity Threshold",
               "type": "number"
            },
            "embedding": {
               "default": "nvidia/nv-embedqa-e5-v5",
               "description": "Model to use for embedding-based detections.",
               "title": "Embedding",
               "type": "string"
            }
         },
         "title": "JailbreakDetectionConfig",
         "type": "object"
      },
      "KnowledgeBaseConfig": {
         "properties": {
            "folder": {
               "default": "kb",
               "description": "The folder from which the documents should be loaded.",
               "title": "Folder",
               "type": "string"
            },
            "embedding_search_provider": {
               "$ref": "#/$defs/EmbeddingSearchProvider",
               "description": "The search provider used to search the knowledge base."
            }
         },
         "title": "KnowledgeBaseConfig",
         "type": "object"
      },
      "LogAdapterConfig": {
         "additionalProperties": true,
         "properties": {
            "name": {
               "default": "FileSystem",
               "description": "The name of the adapter.",
               "title": "Name",
               "type": "string"
            }
         },
         "title": "LogAdapterConfig",
         "type": "object"
      },
      "MessageTemplate": {
         "description": "Template for a message structure.",
         "properties": {
            "type": {
               "description": "The type of message, e.g., 'assistant', 'user', 'system'.",
               "title": "Type",
               "type": "string"
            },
            "content": {
               "description": "The content of the message.",
               "title": "Content",
               "type": "string"
            }
         },
         "required": [
            "type",
            "content"
         ],
         "title": "MessageTemplate",
         "type": "object"
      },
      "Model": {
         "description": "Configuration of a model used by the rails engine.\n\nTypically, the main model is configured e.g.:\n{\n    \"type\": \"main\",\n    \"engine\": \"openai\",\n    \"model\": \"gpt-3.5-turbo-instruct\"\n}",
         "properties": {
            "type": {
               "title": "Type",
               "type": "string"
            },
            "engine": {
               "title": "Engine",
               "type": "string"
            },
            "model": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the model. If not specified, it should be specified through the parameters attribute.",
               "title": "Model"
            },
            "parameters": {
               "title": "Parameters",
               "type": "object"
            }
         },
         "required": [
            "type",
            "engine"
         ],
         "title": "Model",
         "type": "object"
      },
      "OutputRails": {
         "description": "Configuration of output rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement output rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "OutputRails",
         "type": "object"
      },
      "PatronusEvaluateApiParams": {
         "description": "Config to parameterize the Patronus Evaluate API call",
         "properties": {
            "success_strategy": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "all_pass",
               "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
            },
            "params": {
               "description": "Parameters to the Patronus Evaluate API",
               "title": "Params",
               "type": "object"
            }
         },
         "title": "PatronusEvaluateApiParams",
         "type": "object"
      },
      "PatronusEvaluateConfig": {
         "description": "Config for the Patronus Evaluate API call",
         "properties": {
            "evaluate_config": {
               "$ref": "#/$defs/PatronusEvaluateApiParams",
               "description": "Configuration passed to the Patronus Evaluate API"
            }
         },
         "title": "PatronusEvaluateConfig",
         "type": "object"
      },
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      },
      "PatronusRailConfig": {
         "description": "Configuration data for the Patronus Evaluate API",
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Input Guardrail"
            },
            "output": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Output Guardrail"
            }
         },
         "title": "PatronusRailConfig",
         "type": "object"
      },
      "PrivateAIDetection": {
         "description": "Configuration for Private AI.",
         "properties": {
            "server_endpoint": {
               "default": "http://localhost:8080/process/text",
               "description": "The endpoint for the private AI detection server.",
               "title": "Server Endpoint",
               "type": "string"
            },
            "input": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "PrivateAIDetection",
         "type": "object"
      },
      "PrivateAIDetectionOptions": {
         "description": "Configuration options for Private AI.",
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            }
         },
         "title": "PrivateAIDetectionOptions",
         "type": "object"
      },
      "Rails": {
         "description": "Configuration of specific rails.",
         "properties": {
            "config": {
               "$ref": "#/$defs/RailsConfigData"
            },
            "input": {
               "$ref": "#/$defs/InputRails",
               "description": "Configuration of the input rails."
            },
            "output": {
               "$ref": "#/$defs/OutputRails",
               "description": "Configuration of the output rails."
            },
            "retrieval": {
               "$ref": "#/$defs/RetrievalRails",
               "description": "Configuration of the retrieval rails."
            },
            "dialog": {
               "$ref": "#/$defs/DialogRails",
               "description": "Configuration of the dialog rails."
            },
            "actions": {
               "$ref": "#/$defs/ActionRails",
               "description": "Configuration of action rails."
            }
         },
         "title": "Rails",
         "type": "object"
      },
      "RailsConfigData": {
         "description": "Configuration data for specific rails that are supported out-of-the-box.",
         "properties": {
            "fact_checking": {
               "$ref": "#/$defs/FactCheckingRailConfig"
            },
            "autoalign": {
               "$ref": "#/$defs/AutoAlignRailConfig",
               "description": "Configuration data for the AutoAlign guardrails API."
            },
            "patronus": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusRailConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration data for the Patronus Evaluate API."
            },
            "sensitive_data_detection": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/SensitiveDataDetection"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for detecting sensitive data."
            },
            "jailbreak_detection": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/JailbreakDetectionConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for jailbreak detection."
            },
            "privateai": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PrivateAIDetection"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Configuration for Private AI."
            }
         },
         "title": "RailsConfigData",
         "type": "object"
      },
      "RetrievalRails": {
         "description": "Configuration of retrieval rails.",
         "properties": {
            "flows": {
               "description": "The names of all the flows that implement retrieval rails.",
               "items": {
                  "type": "string"
               },
               "title": "Flows",
               "type": "array"
            }
         },
         "title": "RetrievalRails",
         "type": "object"
      },
      "SensitiveDataDetection": {
         "description": "Configuration of what sensitive data should be detected.",
         "properties": {
            "recognizers": {
               "description": "Additional custom recognizers. Check out https://microsoft.github.io/presidio/tutorial/08_no_code/ for more details.",
               "items": {
                  "type": "object"
               },
               "title": "Recognizers",
               "type": "array"
            },
            "input": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "SensitiveDataDetection",
         "type": "object"
      },
      "SensitiveDataDetectionOptions": {
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ for the list of supported entities.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            },
            "mask_token": {
               "default": "*",
               "description": "The token that should be used to mask the sensitive data.",
               "title": "Mask Token",
               "type": "string"
            },
            "score_threshold": {
               "default": 0.2,
               "description": "The score threshold that should be used to detect the sensitive data.",
               "title": "Score Threshold",
               "type": "number"
            }
         },
         "title": "SensitiveDataDetectionOptions",
         "type": "object"
      },
      "SingleCallConfig": {
         "description": "Configuration for the single LLM call option for topical rails.",
         "properties": {
            "enabled": {
               "default": false,
               "title": "Enabled",
               "type": "boolean"
            },
            "fallback_to_multiple_calls": {
               "default": true,
               "description": "Whether to fall back to multiple calls if a single call is not possible.",
               "title": "Fallback To Multiple Calls",
               "type": "boolean"
            }
         },
         "title": "SingleCallConfig",
         "type": "object"
      },
      "TaskPrompt": {
         "description": "Configuration for prompts that will be used for a specific task.",
         "properties": {
            "task": {
               "description": "The id of the task associated with this prompt.",
               "title": "Task",
               "type": "string"
            },
            "content": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The content of the prompt, if it's a string.",
               "title": "Content"
            },
            "messages": {
               "anyOf": [
                  {
                     "items": {
                        "anyOf": [
                           {
                              "$ref": "#/$defs/MessageTemplate"
                           },
                           {
                              "type": "string"
                           }
                        ]
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The list of messages included in the prompt. Used for chat models.",
               "title": "Messages"
            },
            "models": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "If specified, the prompt will be used only for the given LLM engines/models. The format is a list of strings with the format: <engine> or <engine>/<model>.",
               "title": "Models"
            },
            "output_parser": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The name of the output parser to use for this prompt.",
               "title": "Output Parser"
            },
            "max_length": {
               "anyOf": [
                  {
                     "type": "integer"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": 16000,
               "description": "The maximum length of the prompt in number of characters.",
               "title": "Max Length"
            },
            "mode": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "standard",
               "description": "Corresponds to the `prompting_mode` for which this prompt is fetched. Default is 'standard'.",
               "title": "Mode"
            },
            "stop": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "If specified, configures the stop tokens for models that support them.",
               "title": "Stop"
            },
            "max_tokens": {
               "anyOf": [
                  {
                     "type": "integer"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The maximum number of tokens that can be generated in the chat completion.",
               "title": "Max Tokens"
            }
         },
         "required": [
            "task"
         ],
         "title": "TaskPrompt",
         "type": "object"
      },
      "TracingConfig": {
         "properties": {
            "enabled": {
               "default": false,
               "title": "Enabled",
               "type": "boolean"
            },
            "adapters": {
               "description": "The list of tracing adapters to use. If not specified, the default adapters are used.",
               "items": {
                  "$ref": "#/$defs/LogAdapterConfig"
               },
               "title": "Adapters",
               "type": "array"
            }
         },
         "title": "TracingConfig",
         "type": "object"
      },
      "UserMessagesConfig": {
         "description": "Configuration for how the user messages are interpreted.",
         "properties": {
            "embeddings_only": {
               "default": false,
               "description": "Whether to use only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only",
               "type": "boolean"
            },
            "embeddings_only_similarity_threshold": {
               "anyOf": [
                  {
                     "maximum": 1.0,
                     "minimum": 0.0,
                     "type": "number"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The similarity threshold to use when using only embeddings for computing the user canonical form messages.",
               "title": "Embeddings Only Similarity Threshold"
            },
            "embeddings_only_fallback_intent": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Defines the fallback intent when the similarity is below the threshold. If set to None, the user intent is computed normally using the LLM. If set to a string value, that string is used as the intent.",
               "title": "Embeddings Only Fallback Intent"
            }
         },
         "title": "UserMessagesConfig",
         "type": "object"
      }
   },
   "required": [
      "models"
   ]
}

Fields:
field actions_server_url: str | None = None#

The URL of the actions server that should be used for the rails.

field bot_messages: Dict[str, List[str]] [Optional]#

The list of bot messages that should be used for the rails.

field colang_version: str = '1.0'#

The Colang version to use.

field config_path: str | None = None#

The path from which the configuration was loaded.

field core: CoreConfig [Optional]#

Configuration for core internal mechanics.

field custom_data: Dict [Optional]#

Any custom configuration data that might be needed.

field docs: List[Document] | None = None#

List of documents that should be used for question answering.

field enable_multi_step_generation: bool | None = False#

Whether to enable multi-step generation for the LLM.

field enable_rails_exceptions: bool = False#

If set, the pre-defined guardrails raise exceptions instead of returning pre-defined messages.

field event_source_uid: str = 'NeMoGuardrails-Colang-2.x'#

The source ID of events sent by the Colang Runtime. Useful for identifying the component that sent an event.

field flows: List[Dict | Any] [Optional]#

The list of flows that should be used for the rails.

field import_paths: List[str] | None [Optional]#

A list of additional paths from which configuration elements (colang flows, .yml files, actions) should be loaded.

field imported_paths: Dict[str, str] | None [Optional]#

The mapping between the imported paths and the actual full path to which they were resolved.

field instructions: List[Instruction] | None = [Instruction(type='general', content='Below is a conversation between a helpful AI assistant and a user. The bot is designed to generate human-like text based on the input that it receives. The bot is talkative and provides lots of specific details. If the bot does not know the answer to a question, it truthfully says it does not know.')]#

List of instructions in natural language that the LLM should use.
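For example, a general instruction can be set in the `config.yml` of a guardrails configuration. The sketch below uses the `type`/`content` structure from the `Instruction` schema above; the wording of the instruction itself is illustrative:

```yaml
# config.yml (sketch): a single general instruction passed to the LLM
instructions:
  - type: general
    content: |
      Below is a conversation between a helpful AI assistant and a user.
      If the bot does not know the answer to a question, it truthfully
      says it does not know.
```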

field knowledge_base: KnowledgeBaseConfig [Optional]#

Configuration for the built-in knowledge base support.

field lowest_temperature: float | None = 0.001#

The lowest temperature that should be used for the LLM.

field models: List[Model] [Required]#

The list of models used by the rails configuration.
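The `Model` docstring above shows the typical shape of the main model entry. In YAML form, a minimal `config.yml` might look like the following sketch (the engine and model values mirror the example from the docstring and are illustrative):

```yaml
# config.yml (sketch): the required `models` list
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
```

Only `type` and `engine` are required by the schema; `model` can instead be supplied through the `parameters` attribute.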

field passthrough: bool | None = None#

Whether the original prompt should pass through the guardrails configuration as is. This means it will not be altered in any way.

field prompting_mode: str | None = 'standard'#

Allows choosing between different prompting strategies.

field prompts: List[TaskPrompt] | None = None#

The prompts that should be used for the various LLM tasks.
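Based on the `TaskPrompt` schema above (`task` is required; `content`, `models`, and `max_length` are optional), a prompt override might be sketched as follows. The task id and prompt text here are assumptions for illustration:

```yaml
# config.yml (sketch): overriding the prompt for a specific task
prompts:
  - task: self_check_input
    models:
      - openai/gpt-3.5-turbo-instruct
    max_length: 16000
    content: |
      Your task is to check whether the user message below complies
      with the policy for talking with the bot.
```

Note the `models` entries use the `<engine>` or `<engine>/<model>` format described in the schema.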

field rails: Rails [Optional]#

Configuration for the various rails (input, output, etc.).
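Per the `Rails`, `InputRails`, and `OutputRails` schemas above, each rail type is configured by listing the names of the flows that implement it. A minimal sketch (the flow names are illustrative):

```yaml
# config.yml (sketch): enabling input and output rail flows
rails:
  input:
    flows:
      - self check input
  output:
    flows:
      - self check output
```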

field raw_llm_call_action: str | None = 'raw llm call'#

The name of the action that would execute the original raw LLM call.

field sample_conversation: str | None = 'user "Hello there!"\n  express greeting\nbot express greeting\n  "Hello! How can I assist you today?"\nuser "What can you do for me?"\n  ask about capabilities\nbot respond about capabilities\n  "As an AI assistant, I can help you with a wide range of tasks. This includes question answering on various topics, generating text for various purposes and providing suggestions based on your preferences."\nuser "Tell me a bit about the history of NVIDIA."\n  ask general question\nbot response for general question\n  "NVIDIA is a technology company that specializes in designing and manufacturing graphics processing units (GPUs) and other computer hardware. The company was founded in 1993 by Jen-Hsun Huang, Chris Malachowsky, and Curtis Priem."\nuser "tell me more"\n  request more information\nbot provide more information\n  "Initially, the company focused on developing 3D graphics processing technology for the PC gaming market. In 1999, NVIDIA released the GeForce 256, the world\'s first GPU, which was a major breakthrough for the gaming industry. The company continued to innovate in the GPU space, releasing new products and expanding into other markets such as professional graphics, mobile devices, and artificial intelligence."\nuser "thanks"\n  express appreciation\nbot express appreciation and offer additional help\n  "You\'re welcome. If you have any more questions or if there\'s anything else I can help you with, please don\'t hesitate to ask."\n'#

The sample conversation that should be used inside the prompts.

field streaming: bool = False#

Whether this configuration should use streaming mode or not.

field tracing: TracingConfig [Optional]#

Configuration for tracing.

field user_messages: Dict[str, List[str]] [Optional]#

The list of user messages that should be used for the rails.
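Both `user_messages` and `bot_messages` map a canonical form to a list of example utterances. A sketch consistent with the sample conversation above (canonical forms are often defined in Colang files instead; supplying them via the config dict is shown here for illustration):

```yaml
# config.yml (sketch): canonical forms mapped to example utterances
user_messages:
  express greeting:
    - "Hello there!"
    - "Hi!"
bot_messages:
  express greeting:
    - "Hello! How can I assist you today?"
```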

classmethod check_output_parser_exists(values)#
classmethod check_prompt_exist_for_self_check_rails(values)#
classmethod fill_in_default_values_for_v2_x(values)#
classmethod from_content(
colang_content: str | None = None,
yaml_content: str | None = None,
config: dict | None = None,
)#

Loads a configuration from the provided colang/YAML content/config dict.

classmethod from_path(config_path: str)#

Loads a configuration from a given path.

Supports loading from a single file or from a directory.

classmethod parse_object(obj)#

Parses a configuration object from a given dictionary.

property streaming_supported#

Whether the current config supports streaming or not.

Currently, we don’t support streaming if there are output rails.

pydantic model nemoguardrails.rails.llm.config.RailsConfigData#

Configuration data for specific rails that are supported out-of-the-box.

Show JSON schema
{
   "title": "RailsConfigData",
   "description": "Configuration data for specific rails that are supported out-of-the-box.",
   "type": "object",
   "properties": {
      "fact_checking": {
         "$ref": "#/$defs/FactCheckingRailConfig"
      },
      "autoalign": {
         "$ref": "#/$defs/AutoAlignRailConfig",
         "description": "Configuration data for the AutoAlign guardrails API."
      },
      "patronus": {
         "anyOf": [
            {
               "$ref": "#/$defs/PatronusRailConfig"
            },
            {
               "type": "null"
            }
         ],
         "description": "Configuration data for the Patronus Evaluate API."
      },
      "sensitive_data_detection": {
         "anyOf": [
            {
               "$ref": "#/$defs/SensitiveDataDetection"
            },
            {
               "type": "null"
            }
         ],
         "description": "Configuration for detecting sensitive data."
      },
      "jailbreak_detection": {
         "anyOf": [
            {
               "$ref": "#/$defs/JailbreakDetectionConfig"
            },
            {
               "type": "null"
            }
         ],
         "description": "Configuration for jailbreak detection."
      },
      "privateai": {
         "anyOf": [
            {
               "$ref": "#/$defs/PrivateAIDetection"
            },
            {
               "type": "null"
            }
         ],
         "description": "Configuration for Private AI."
      }
   },
   "$defs": {
      "AutoAlignOptions": {
         "description": "List of guardrails that are activated",
         "properties": {
            "guardrails_config": {
               "description": "The guardrails configuration that is passed to the AutoAlign endpoint",
               "title": "Guardrails Config",
               "type": "object"
            }
         },
         "title": "AutoAlignOptions",
         "type": "object"
      },
      "AutoAlignRailConfig": {
         "description": "Configuration data for the AutoAlign API",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "input": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Input configuration for AutoAlign guardrails"
            },
            "output": {
               "$ref": "#/$defs/AutoAlignOptions",
               "description": "Output configuration for AutoAlign guardrails"
            }
         },
         "title": "AutoAlignRailConfig",
         "type": "object"
      },
      "FactCheckingRailConfig": {
         "description": "Configuration data for the fact-checking rail.",
         "properties": {
            "parameters": {
               "title": "Parameters",
               "type": "object"
            },
            "fallback_to_self_check": {
               "default": false,
               "description": "Whether to fall back to self-check if another method fail.",
               "title": "Fallback To Self Check",
               "type": "boolean"
            }
         },
         "title": "FactCheckingRailConfig",
         "type": "object"
      },
      "JailbreakDetectionConfig": {
         "description": "Configuration data for jailbreak detection.",
         "properties": {
            "server_endpoint": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The endpoint for the jailbreak detection heuristics server.",
               "title": "Server Endpoint"
            },
            "length_per_perplexity_threshold": {
               "default": 89.79,
               "description": "The length/perplexity threshold.",
               "title": "Length Per Perplexity Threshold",
               "type": "number"
            },
            "prefix_suffix_perplexity_threshold": {
               "default": 1845.65,
               "description": "The prefix/suffix perplexity threshold.",
               "title": "Prefix Suffix Perplexity Threshold",
               "type": "number"
            },
            "embedding": {
               "default": "nvidia/nv-embedqa-e5-v5",
               "description": "Model to use for embedding-based detections.",
               "title": "Embedding",
               "type": "string"
            }
         },
         "title": "JailbreakDetectionConfig",
         "type": "object"
      },
      "PatronusEvaluateApiParams": {
         "description": "Config to parameterize the Patronus Evaluate API call",
         "properties": {
            "success_strategy": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluationSuccessStrategy"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": "all_pass",
               "description": "Strategy to determine whether the Patronus Evaluate API Guardrail passes or not."
            },
            "params": {
               "description": "Parameters to the Patronus Evaluate API",
               "title": "Params",
               "type": "object"
            }
         },
         "title": "PatronusEvaluateApiParams",
         "type": "object"
      },
      "PatronusEvaluateConfig": {
         "description": "Config for the Patronus Evaluate API call",
         "properties": {
            "evaluate_config": {
               "$ref": "#/$defs/PatronusEvaluateApiParams",
               "description": "Configuration passed to the Patronus Evaluate API"
            }
         },
         "title": "PatronusEvaluateConfig",
         "type": "object"
      },
      "PatronusEvaluationSuccessStrategy": {
         "description": "Strategy for determining whether a Patronus Evaluation API\nrequest should pass, especially when multiple evaluators\nare called in a single request.\nALL_PASS requires all evaluators to pass for success.\nANY_PASS requires only one evaluator to pass for success.",
         "enum": [
            "all_pass",
            "any_pass"
         ],
         "title": "PatronusEvaluationSuccessStrategy",
         "type": "string"
      },
      "PatronusRailConfig": {
         "description": "Configuration data for the Patronus Evaluate API",
         "properties": {
            "input": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Input Guardrail"
            },
            "output": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/PatronusEvaluateConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "description": "Patronus Evaluate API configuration for an Output Guardrail"
            }
         },
         "title": "PatronusRailConfig",
         "type": "object"
      },
      "PrivateAIDetection": {
         "description": "Configuration for Private AI.",
         "properties": {
            "server_endpoint": {
               "default": "http://localhost:8080/process/text",
               "description": "The endpoint for the private AI detection server.",
               "title": "Server Endpoint",
               "type": "string"
            },
            "input": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/PrivateAIDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "PrivateAIDetection",
         "type": "object"
      },
      "PrivateAIDetectionOptions": {
         "description": "Configuration options for Private AI.",
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            }
         },
         "title": "PrivateAIDetectionOptions",
         "type": "object"
      },
      "SensitiveDataDetection": {
         "description": "Configuration of what sensitive data should be detected.",
         "properties": {
            "recognizers": {
               "description": "Additional custom recognizers. Check out https://microsoft.github.io/presidio/tutorial/08_no_code/ for more details.",
               "items": {
                  "type": "object"
               },
               "title": "Recognizers",
               "type": "array"
            },
            "input": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the user input."
            },
            "output": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on the bot output."
            },
            "retrieval": {
               "$ref": "#/$defs/SensitiveDataDetectionOptions",
               "description": "Configuration of the entities to be detected on retrieved relevant chunks."
            }
         },
         "title": "SensitiveDataDetection",
         "type": "object"
      },
      "SensitiveDataDetectionOptions": {
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ forthe list of supported entities.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            },
            "mask_token": {
               "default": "*",
               "description": "The token that should be used to mask the sensitive data.",
               "title": "Mask Token",
               "type": "string"
            },
            "score_threshold": {
               "default": 0.2,
               "description": "The score threshold that should be used to detect the sensitive data.",
               "title": "Score Threshold",
               "type": "number"
            }
         },
         "title": "SensitiveDataDetectionOptions",
         "type": "object"
      }
   }
}

Fields:
field autoalign: AutoAlignRailConfig [Optional]#

Configuration data for the AutoAlign guardrails API.

field fact_checking: FactCheckingRailConfig [Optional]#

Configuration data for the fact-checking rail.

field jailbreak_detection: JailbreakDetectionConfig | None [Optional]#

Configuration for jailbreak detection.

field patronus: PatronusRailConfig | None [Optional]#

Configuration data for the Patronus Evaluate API.

field privateai: PrivateAIDetection | None [Optional]#

Configuration for Private AI.

field sensitive_data_detection: SensitiveDataDetection | None [Optional]#

Configuration for detecting sensitive data.
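The fields above map one-to-one onto keys of the rails configuration data. As a minimal sketch (every value is an illustrative assumption, not a recommended setting), the same structure expressed as a plain Python dict:

```python
# Sketch of a RailsConfigData payload as a plain dict. Keys mirror the fields
# documented above; values are assumptions for illustration only.
rails_config_data = {
    "fact_checking": {"fallback_to_self_check": True},
    "jailbreak_detection": {
        "length_per_perplexity_threshold": 89.79,      # documented default
        "prefix_suffix_perplexity_threshold": 1845.65,  # documented default
        "embedding": "nvidia/nv-embedqa-e5-v5",
    },
    "sensitive_data_detection": {
        "input": {"entities": ["PERSON", "EMAIL_ADDRESS"]},
    },
}

# Only the six documented top-level keys are valid.
valid_keys = {"fact_checking", "autoalign", "patronus",
              "sensitive_data_detection", "jailbreak_detection", "privateai"}
assert set(rails_config_data) <= valid_keys
```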

pydantic model nemoguardrails.rails.llm.config.RetrievalRails#

Configuration of retrieval rails.

Show JSON schema
{
   "title": "RetrievalRails",
   "description": "Configuration of retrieval rails.",
   "type": "object",
   "properties": {
      "flows": {
         "description": "The names of all the flows that implement retrieval rails.",
         "items": {
            "type": "string"
         },
         "title": "Flows",
         "type": "array"
      }
   }
}

Fields:
field flows: List[str] [Optional]#

The names of all the flows that implement retrieval rails.

pydantic model nemoguardrails.rails.llm.config.SensitiveDataDetection#

Configuration of what sensitive data should be detected.

Show JSON schema
{
   "title": "SensitiveDataDetection",
   "description": "Configuration of what sensitive data should be detected.",
   "type": "object",
   "properties": {
      "recognizers": {
         "description": "Additional custom recognizers. Check out https://microsoft.github.io/presidio/tutorial/08_no_code/ for more details.",
         "items": {
            "type": "object"
         },
         "title": "Recognizers",
         "type": "array"
      },
      "input": {
         "$ref": "#/$defs/SensitiveDataDetectionOptions",
         "description": "Configuration of the entities to be detected on the user input."
      },
      "output": {
         "$ref": "#/$defs/SensitiveDataDetectionOptions",
         "description": "Configuration of the entities to be detected on the bot output."
      },
      "retrieval": {
         "$ref": "#/$defs/SensitiveDataDetectionOptions",
         "description": "Configuration of the entities to be detected on retrieved relevant chunks."
      }
   },
   "$defs": {
      "SensitiveDataDetectionOptions": {
         "properties": {
            "entities": {
               "description": "The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ forthe list of supported entities.",
               "items": {
                  "type": "string"
               },
               "title": "Entities",
               "type": "array"
            },
            "mask_token": {
               "default": "*",
               "description": "The token that should be used to mask the sensitive data.",
               "title": "Mask Token",
               "type": "string"
            },
            "score_threshold": {
               "default": 0.2,
               "description": "The score threshold that should be used to detect the sensitive data.",
               "title": "Score Threshold",
               "type": "number"
            }
         },
         "title": "SensitiveDataDetectionOptions",
         "type": "object"
      }
   }
}

Fields:
field input: SensitiveDataDetectionOptions [Optional]#

Configuration of the entities to be detected on the user input.

field output: SensitiveDataDetectionOptions [Optional]#

Configuration of the entities to be detected on the bot output.

field recognizers: List[dict] [Optional]#

Additional custom recognizers. Check out https://microsoft.github.io/presidio/tutorial/08_no_code/ for more details.

field retrieval: SensitiveDataDetectionOptions [Optional]#

Configuration of the entities to be detected on retrieved relevant chunks.
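Putting the four fields together, a sensitive-data-detection configuration supplies per-stage options for input, output, and retrieval, plus optional custom recognizers. A sketch as a plain dict (entity names follow Presidio's supported-entities list; all values are illustrative assumptions):

```python
# Sketch of a SensitiveDataDetection config as a plain dict. Entity names are
# drawn from Presidio's supported-entities list; the values are assumptions.
sensitive_data_detection = {
    "recognizers": [],  # optional no-code Presidio recognizers
    "input": {"entities": ["PERSON", "EMAIL_ADDRESS"], "score_threshold": 0.4},
    "output": {"entities": ["EMAIL_ADDRESS"], "mask_token": "#"},
    "retrieval": {"entities": ["PERSON"]},  # applied to retrieved chunks
}
```

Each of the `input`, `output`, and `retrieval` values has the `SensitiveDataDetectionOptions` shape documented below.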

pydantic model nemoguardrails.rails.llm.config.SensitiveDataDetectionOptions#

Show JSON schema
{
   "title": "SensitiveDataDetectionOptions",
   "type": "object",
   "properties": {
      "entities": {
         "description": "The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ forthe list of supported entities.",
         "items": {
            "type": "string"
         },
         "title": "Entities",
         "type": "array"
      },
      "mask_token": {
         "default": "*",
         "description": "The token that should be used to mask the sensitive data.",
         "title": "Mask Token",
         "type": "string"
      },
      "score_threshold": {
         "default": 0.2,
         "description": "The score threshold that should be used to detect the sensitive data.",
         "title": "Score Threshold",
         "type": "number"
      }
   }
}

Fields:
field entities: List[str] [Optional]#

The list of entities that should be detected. Check out https://microsoft.github.io/presidio/supported_entities/ for the list of supported entities.

field mask_token: str = '*'#

The token that should be used to mask the sensitive data.

field score_threshold: float = 0.2#

The score threshold that should be used to detect the sensitive data.
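To make the interaction of `mask_token` and `score_threshold` concrete, here is an illustrative sketch (not the actual Presidio-based implementation): detections scoring below the threshold are ignored, and those at or above it are replaced by the mask token:

```python
# Illustrative sketch only, assuming detections arrive as sorted,
# non-overlapping (start, end, score) spans. Not the library's implementation.
def mask_detections(text, detections, mask_token="*", score_threshold=0.2):
    out, cursor = [], 0
    for start, end, score in detections:
        if score < score_threshold:
            continue  # low-confidence match: leave the text untouched
        out.append(text[cursor:start])
        out.append(mask_token * (end - start))  # mask the detected span
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

# A PERSON span at (5, 10) with score 0.9 clears the default 0.2 threshold.
masked = mask_detections("Call Alice now", [(5, 10, 0.9)])
# masked == "Call ***** now"
```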

pydantic model nemoguardrails.rails.llm.config.SingleCallConfig#

Configuration for the single LLM call option for topical rails.

Show JSON schema
{
   "title": "SingleCallConfig",
   "description": "Configuration for the single LLM call option for topical rails.",
   "type": "object",
   "properties": {
      "enabled": {
         "default": false,
         "title": "Enabled",
         "type": "boolean"
      },
      "fallback_to_multiple_calls": {
         "default": true,
         "description": "Whether to fall back to multiple calls if a single call is not possible.",
         "title": "Fallback To Multiple Calls",
         "type": "boolean"
      }
   }
}

Fields:
field enabled: bool = False#
field fallback_to_multiple_calls: bool = True#

Whether to fall back to multiple calls if a single call is not possible.

pydantic model nemoguardrails.rails.llm.config.TaskPrompt#

Configuration for prompts that will be used for a specific task.

Show JSON schema
{
   "title": "TaskPrompt",
   "description": "Configuration for prompts that will be used for a specific task.",
   "type": "object",
   "properties": {
      "task": {
         "description": "The id of the task associated with this prompt.",
         "title": "Task",
         "type": "string"
      },
      "content": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The content of the prompt, if it's a string.",
         "title": "Content"
      },
      "messages": {
         "anyOf": [
            {
               "items": {
                  "anyOf": [
                     {
                        "$ref": "#/$defs/MessageTemplate"
                     },
                     {
                        "type": "string"
                     }
                  ]
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The list of messages included in the prompt. Used for chat models.",
         "title": "Messages"
      },
      "models": {
         "anyOf": [
            {
               "items": {
                  "type": "string"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "If specified, the prompt will be used only for the given LLM engines/models. The format is a list of strings with the format: <engine> or <engine>/<model>.",
         "title": "Models"
      },
      "output_parser": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The name of the output parser to use for this prompt.",
         "title": "Output Parser"
      },
      "max_length": {
         "anyOf": [
            {
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": 16000,
         "description": "The maximum length of the prompt in number of characters.",
         "title": "Max Length"
      },
      "mode": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": "standard",
         "description": "Corresponds to the `prompting_mode` for which this prompt is fetched. Default is 'standard'.",
         "title": "Mode"
      },
      "stop": {
         "anyOf": [
            {
               "items": {
                  "type": "string"
               },
               "type": "array"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "If specified, will be configure stop tokens for models that support this.",
         "title": "Stop"
      },
      "max_tokens": {
         "anyOf": [
            {
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The maximum number of tokens that can be generated in the chat completion.",
         "title": "Max Tokens"
      }
   },
   "$defs": {
      "MessageTemplate": {
         "description": "Template for a message structure.",
         "properties": {
            "type": {
               "description": "The type of message, e.g., 'assistant', 'user', 'system'.",
               "title": "Type",
               "type": "string"
            },
            "content": {
               "description": "The content of the message.",
               "title": "Content",
               "type": "string"
            }
         },
         "required": [
            "type",
            "content"
         ],
         "title": "MessageTemplate",
         "type": "object"
      }
   },
   "required": [
      "task"
   ]
}

Fields:
field content: str | None = None#

The content of the prompt, if it’s a string.

field max_length: int | None = 16000#

The maximum length of the prompt in number of characters.

field max_tokens: int | None = None#

The maximum number of tokens that can be generated in the chat completion.

field messages: List[MessageTemplate | str] | None = None#

The list of messages included in the prompt. Used for chat models.

field mode: str | None = 'standard'#

Corresponds to the prompting_mode for which this prompt is fetched. Default is ‘standard’.

field models: List[str] | None = None#

If specified, the prompt will be used only for the given LLM engines/models. The format is a list of strings with the format: <engine> or <engine>/<model>.

field output_parser: str | None = None#

The name of the output parser to use for this prompt.

field stop: List[str] | None = None#

If specified, configures stop tokens for models that support this.

field task: str [Required]#

The id of the task associated with this prompt.

classmethod check_fields(values)#
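The fields above combine as follows: `content` holds a string prompt for completion models, `messages` holds a message list for chat models, and `models` restricts which engines the prompt applies to. A sketch as plain dicts (the task id, template placeholders, and engine/model names are assumptions for illustration):

```python
# Sketch of two TaskPrompt entries as plain dicts. Task ids, placeholders,
# and model names are illustrative assumptions, not documented values.
completion_prompt = {
    "task": "generate_user_intent",
    "content": "Your task is ...\n{{ history }}",  # string prompt: completion models
    "models": ["openai/gpt-3.5-turbo-instruct"],   # "<engine>/<model>" format
    "stop": ["\n\n"],
    "max_length": 16000,                           # documented default
}

chat_prompt = {
    "task": "generate_user_intent",
    "messages": [                                  # message list: chat models
        {"type": "system", "content": "Your task is ..."},
        {"type": "user", "content": "{{ history }}"},
    ],
    "mode": "standard",                            # documented default prompting mode
}

# `task` is the only required key; `content` and `messages` are alternatives.
assert all("task" in p for p in (completion_prompt, chat_prompt))
```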
pydantic model nemoguardrails.rails.llm.config.TracingConfig#

Show JSON schema
{
   "title": "TracingConfig",
   "type": "object",
   "properties": {
      "enabled": {
         "default": false,
         "title": "Enabled",
         "type": "boolean"
      },
      "adapters": {
         "description": "The list of tracing adapters to use. If not specified, the default adapters are used.",
         "items": {
            "$ref": "#/$defs/LogAdapterConfig"
         },
         "title": "Adapters",
         "type": "array"
      }
   },
   "$defs": {
      "LogAdapterConfig": {
         "additionalProperties": true,
         "properties": {
            "name": {
               "default": "FileSystem",
               "description": "The name of the adapter.",
               "title": "Name",
               "type": "string"
            }
         },
         "title": "LogAdapterConfig",
         "type": "object"
      }
   }
}

Fields:
field adapters: List[LogAdapterConfig] [Optional]#

The list of tracing adapters to use. If not specified, the default adapters are used.

field enabled: bool = False#
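A tracing configuration as a plain dict, for illustration. `"FileSystem"` is the documented default adapter name; the `"filepath"` key is an assumption, permitted only because `LogAdapterConfig` accepts additional properties:

```python
# Sketch of a TracingConfig as a plain dict. The "filepath" key is an
# assumption (LogAdapterConfig allows extra properties), not a documented field.
tracing = {
    "enabled": True,
    "adapters": [{"name": "FileSystem", "filepath": "./traces/example.jsonl"}],
}
assert tracing["adapters"][0]["name"] == "FileSystem"
```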
pydantic model nemoguardrails.rails.llm.config.UserMessagesConfig#

Configuration for how the user messages are interpreted.

Show JSON schema
{
   "title": "UserMessagesConfig",
   "description": "Configuration for how the user messages are interpreted.",
   "type": "object",
   "properties": {
      "embeddings_only": {
         "default": false,
         "description": "Whether to use only embeddings for computing the user canonical form messages.",
         "title": "Embeddings Only",
         "type": "boolean"
      },
      "embeddings_only_similarity_threshold": {
         "anyOf": [
            {
               "maximum": 1.0,
               "minimum": 0.0,
               "type": "number"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The similarity threshold to use when using only embeddings for computing the user canonical form messages.",
         "title": "Embeddings Only Similarity Threshold"
      },
      "embeddings_only_fallback_intent": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Defines the fallback intent when the similarity is below the threshold. If set to None, the user intent is computed normally using the LLM. If set to a string value, that string is used as the intent.",
         "title": "Embeddings Only Fallback Intent"
      }
   }
}

Fields:
field embeddings_only: bool = False#

Whether to use only embeddings for computing the user canonical form messages.

field embeddings_only_fallback_intent: str | None = None#

Defines the fallback intent when the similarity is below the threshold. If set to None, the user intent is computed normally using the LLM. If set to a string value, that string is used as the intent.

field embeddings_only_similarity_threshold: float | None = None#

The similarity threshold to use when using only embeddings for computing the user canonical form messages.

Constraints:
  • ge = 0

  • le = 1

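How the three embeddings-only settings plausibly interact when resolving a user canonical form, as an illustrative sketch (not the actual implementation):

```python
# Illustrative decision logic only; the real resolution happens inside the
# library. "use-llm" here stands in for "compute the intent with the LLM".
def resolve_intent(similarity, best_match_intent, *,
                   embeddings_only=False,
                   similarity_threshold=None,
                   fallback_intent=None):
    if not embeddings_only:
        return "use-llm"  # canonical form computed by the LLM as usual
    if similarity_threshold is None or similarity >= similarity_threshold:
        return best_match_intent  # embedding match is trusted directly
    # Below threshold: use the configured fallback intent, else defer to the LLM.
    return fallback_intent if fallback_intent is not None else "use-llm"

resolve_intent(0.92, "ask about pricing",
               embeddings_only=True, similarity_threshold=0.75)
# → "ask about pricing"
```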
nemoguardrails.rails.llm.config.merge_two_dicts(dict_1: dict, dict_2: dict, ignore_keys: Set[str]) → None#

Merges the fields of two dictionaries recursively.
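A minimal sketch of such a recursive merge (not the library's exact implementation): matching the signature above, it mutates `dict_1` in place, returns `None`, and skips any key in `ignore_keys`:

```python
# Illustrative sketch of a recursive dict merge with ignored keys; the
# library's own merge_two_dicts may differ in details.
def merge_two_dicts(dict_1: dict, dict_2: dict, ignore_keys: set) -> None:
    for key, value in dict_2.items():
        if key in ignore_keys:
            continue  # documented behavior: these keys are never merged
        if key in dict_1 and isinstance(dict_1[key], dict) and isinstance(value, dict):
            merge_two_dicts(dict_1[key], value, ignore_keys)  # recurse into nested dicts
        else:
            dict_1[key] = value  # scalars and lists are overwritten

base = {"rails": {"input": {"flows": ["a"]}}, "passthrough": True}
merge_two_dicts(base, {"rails": {"input": {"flows": ["b"]}}, "secret": 1},
                ignore_keys={"secret"})
# base["rails"]["input"]["flows"] == ["b"]; "secret" was skipped.
```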