module nemoguardrails.rails.llm.llmrails#

LLM Rails entry point.

Global Variables#

  • explain_info_var

  • streaming_handler_var


class LLMRails#

Rails based on a given configuration.

method LLMRails.__init__#

__init__(
    config: nemoguardrails.rails.llm.config.RailsConfig,
    llm: Optional[langchain.llms.base.BaseLLM] = None,
    verbose: bool = False
)

Initializes the LLMRails instance.

Args:

  • config: A rails configuration.

  • llm: An optional LLM engine to use.

  • verbose: Whether the logging should be verbose or not.
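
Example (a minimal sketch; the `./config` path is a hypothetical directory containing a rails configuration):

    from nemoguardrails import LLMRails, RailsConfig

    # Load the rails configuration from a local directory (hypothetical path).
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config, verbose=True)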


method LLMRails.explain#

explain() → ExplainInfo

Helper function to return the latest ExplainInfo object.
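
Example (a sketch; assumes `rails` is an initialized LLMRails instance):

    # After a generation call, inspect the LLM calls that were made.
    info = rails.explain()
    info.print_llm_calls_summary()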


method LLMRails.generate#

generate(prompt: Optional[str] = None, messages: Optional[List[dict]] = None)

Synchronous version of generate_async.
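
Example (a sketch; assumes `rails` is an initialized LLMRails instance):

    response = rails.generate(messages=[
        {"role": "user", "content": "Hello! How are you?"}
    ])
    # The next message is returned as a dict with `role` and `content`.
    print(response["content"])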


method LLMRails.generate_async#

generate_async(
    prompt: Optional[str] = None,
    messages: Optional[List[dict]] = None,
    streaming_handler: Optional[nemoguardrails.streaming.StreamingHandler] = None
) → Union[str, dict]

Generate a completion or a next message.

The format for messages is the following:

     [
         {"role": "context", "content": {"user_name": "John"}},
         {"role": "user", "content": "Hello! How are you?"},
         {"role": "assistant", "content": "I am fine, thank you!"},
         {"role": "event", "event": {"type": "UserSilent"}},
         ...
     ]

Args:

  • prompt: The prompt to be used for completion.

  • messages: The history of messages to be used to generate the next message.

  • streaming_handler: If specified, and the config supports streaming, the provided handler will be used for streaming.

Returns: The completion (when a prompt is provided) or the next message.

System messages are not yet supported.
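
Example (a sketch; assumes the `rails` instance from the initialization example above):

    import asyncio

    async def main():
        # The optional "context" message seeds variables such as `user_name`.
        new_message = await rails.generate_async(messages=[
            {"role": "context", "content": {"user_name": "John"}},
            {"role": "user", "content": "Hello! How are you?"},
        ])
        print(new_message["content"])

    asyncio.run(main())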


method LLMRails.generate_events#

generate_events(events: List[dict]) → List[dict]

Synchronous version of LLMRails.generate_events_async.


method LLMRails.generate_events_async#

generate_events_async(events: List[dict]) → List[dict]

Generate the next events based on the provided history.

The format for events is the following:

     [
         {"type": "...", ...},
         ...
     ]

Args:

  • events: The history of events to be used to generate the next events.

Returns: The newly generated event(s).
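
Example (a sketch; the event below follows the standard user utterance event, but the exact schema may vary between versions):

    import asyncio

    async def main():
        events = [{
            "type": "UtteranceUserActionFinished",
            "final_transcript": "Hello! How are you?",
        }]
        new_events = await rails.generate_events_async(events)
        for event in new_events:
            print(event["type"])

    asyncio.run(main())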


method LLMRails.register_action#

register_action(
    action: callable,
    name: Optional[str] = None
)

Register a custom action for the rails configuration.
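
Example (a sketch; `check_order_status` is a hypothetical action):

    async def check_order_status(order_id: str) -> str:
        # Hypothetical lookup; replace with a real backend call.
        return "shipped"

    # The action becomes callable from Colang flows as `check_order_status`.
    rails.register_action(check_order_status, name="check_order_status")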


method LLMRails.register_action_param#

register_action_param(name: str, value: Any)

Registers a custom action parameter.
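
Example (a sketch; a plain dict stands in for a real resource such as a database client):

    # Any registered action that declares a `db` parameter in its signature
    # receives this value when invoked.
    rails.register_action_param("db", {"orders": {}})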


method LLMRails.register_embedding_search_provider#

register_embedding_search_provider(
    name: str,
    cls: Type[nemoguardrails.embeddings.index.EmbeddingsIndex]
) → None

Register a new embedding search provider.

Args:

  • name: The name of the embedding search provider that will be used.

  • cls: The class that will be used to generate and search embeddings.
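
Example (a sketch; the method names assume the EmbeddingsIndex interface of recent releases, and naive substring matching stands in for real embedding similarity):

    from typing import List

    from nemoguardrails.embeddings.index import EmbeddingsIndex, IndexItem

    class SimpleSearchProvider(EmbeddingsIndex):
        def __init__(self, **kwargs):
            # Provider parameters from the config are assumed to arrive as kwargs.
            self._items: List[IndexItem] = []

        async def add_item(self, item: IndexItem):
            self._items.append(item)

        async def add_items(self, items: List[IndexItem]):
            self._items.extend(items)

        async def build(self):
            pass  # nothing to pre-compute in this toy provider

        async def search(self, text: str, max_results: int) -> List[IndexItem]:
            # Naive substring match instead of embedding similarity.
            return [item for item in self._items if text in item.text][:max_results]

    rails.register_embedding_search_provider("simple", SimpleSearchProvider)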


method LLMRails.register_filter#

register_filter(
    filter_fn: callable,
    name: Optional[str] = None
)

Register a custom filter for the rails configuration.
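
Example (a sketch; `to_upper` is a hypothetical filter):

    def to_upper(text: str) -> str:
        # Usable in prompt templates as {{ some_variable | to_upper }}.
        return text.upper()

    rails.register_filter(to_upper, name="to_upper")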


method LLMRails.register_output_parser#

register_output_parser(output_parser: callable, name: str)

Register a custom output parser for the rails configuration.
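
Example (a sketch; `strip_quotes` is a hypothetical parser):

    def strip_quotes(output: str) -> str:
        # Remove surrounding whitespace and quotes from the raw LLM output.
        return output.strip().strip('"')

    rails.register_output_parser(strip_quotes, name="strip_quotes")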


method LLMRails.register_prompt_context#

register_prompt_context(name: str, value_or_fn: Any)

Register a value to be included in the prompt context.

Args:

  • name: The name of the variable or function that will be used.

  • value_or_fn: The value or function that will be used to generate the value.
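
Example (a sketch; assumes a registered function is re-evaluated each time a prompt is rendered):

    from datetime import datetime

    # A static value:
    rails.register_prompt_context("app_name", "Demo Bot")

    # A function, re-evaluated on each prompt render:
    rails.register_prompt_context(
        "current_date", lambda: datetime.now().strftime("%Y-%m-%d")
    )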


method LLMRails.stream_async#

stream_async(
    prompt: Optional[str] = None,
    messages: Optional[List[dict]] = None
) → AsyncIterator[str]

Simplified interface for getting the streamed tokens directly from the LLM.
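
Example (a sketch; assumes `rails` was created with a streaming-enabled configuration):

    import asyncio

    async def main():
        async for chunk in rails.stream_async(messages=[
            {"role": "user", "content": "Tell me a short story."}
        ]):
            print(chunk, end="", flush=True)

    asyncio.run(main())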