# Knowledge Base

By default, an `LLMRails` instance supports using a set of documents as context for generating the bot responses. To include documents as part of your knowledge base, you must place them in the `kb` folder inside your config folder:
```
.
├── config
│   └── kb
│       ├── file_1.md
│       ├── file_2.md
│       └── ...
```
Currently, only the Markdown format is supported.
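No extra code is needed to enable the knowledge base once the files are in place; loading the configuration folder is enough. The following is a minimal sketch, assuming the folder layout above, a working model configuration in `config.yml`, and a reachable LLM (the question text is only illustrative):

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the configuration folder that contains the `kb` sub-folder.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The documents in `config/kb` are indexed automatically and used as
# additional context when answering questions.
response = rails.generate(messages=[
    {"role": "user", "content": "What does the employee handbook say about vacation days?"}
])
print(response["content"])
```

The call itself does not change when documents are added or removed; when documents are present in `kb`, the retrieved chunks are used as extra context automatically.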
## Document Structure

Documents in the knowledge base `kb` folder are automatically processed and indexed for retrieval. The system uses the configured embedding model to create vector representations of the document chunks, which are then stored for efficient similarity search.
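The chunking and indexing logic is internal to the toolkit; the snippet below is only a conceptual sketch of the idea, with a toy bag-of-words embedding standing in for the configured embedding model:

```python
import re
from pathlib import Path

import numpy as np

DIM = 256  # toy embedding size, for illustration only

def toy_embed(text: str) -> np.ndarray:
    """Hash words into a fixed-size vector (stand-in for a real embedding model)."""
    vec = np.zeros(DIM)
    for word in re.findall(r"\w+", text.lower()):
        vec[hash(word) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Split each Markdown file into chunks (here: paragraphs) and embed them.
index = []  # list of (chunk_text, embedding) pairs
for path in Path("./config/kb").glob("*.md"):
    for chunk in path.read_text().split("\n\n"):
        if chunk.strip():
            index.append((chunk, toy_embed(chunk)))
```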
## Retrieval Process

When a user query is received, the system:

1. Computes embeddings for the user query using the configured embedding model.
2. Performs a similarity search against the indexed document chunks.
3. Retrieves the most relevant chunks based on similarity scores.
4. Makes the retrieved chunks available as `$relevant_chunks` in the context.
5. Uses these chunks as additional context when generating the bot response.
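The sketch below illustrates these steps conceptually; it is not the toolkit's actual retrieval code, and the toy embedding and sample chunks are placeholders:

```python
import numpy as np

def toy_embed(text: str) -> np.ndarray:
    """Toy stand-in for the configured embedding model."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Placeholder for chunks already indexed from the `kb` folder.
documents = [
    "Employees receive 20 vacation days per year.",
    "The office is closed on public holidays.",
]
index = [(chunk, toy_embed(chunk)) for chunk in documents]

# 1. Embed the user query.
query_embedding = toy_embed("How many vacation days do I get?")

# 2-3. Score every indexed chunk and keep the most similar ones.
scores = [(float(np.dot(query_embedding, emb)), chunk) for chunk, emb in index]
top_chunks = [chunk for _, chunk in sorted(scores, reverse=True)[:2]]

# 4-5. The selected chunks correspond to `$relevant_chunks` and are passed
# to the LLM as additional context when generating the response.
relevant_chunks = "\n".join(top_chunks)
print(relevant_chunks)
```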
## Configuration

The knowledge base functionality is automatically enabled when documents are present in the `kb` folder. The system uses the same embedding model configuration specified in your `config.yml` under the `models` section. For embedding model configuration examples, refer to LLM Configuration.