Build a RAG using a locally hosted NIM

In this notebook we demonstrate how to build a RAG using NVIDIA Inference Microservices (NIM). We host a Llama3-8b-instruct NIM locally and connect to it from LangChain using the NVIDIA AI Endpoints integration.

We create a vector store by downloading web pages, generating their embeddings, and storing them in FAISS. We then build a conversational retrieval chain for querying the vector store. For this example, we use the NVIDIA Triton documentation website, though the code can be easily modified to use any other source.

The first stage is to load the NVIDIA Triton documentation from the web, chunk the data, generate embeddings, and store them in a FAISS vector store

To get started:

  1. Generate an NGC API key here

  2. Export the API key (export NGC_API_KEY=<your key>). This key will need to be passed to docker run in the next section as the NGC_API_KEY environment variable to download the appropriate models and resources when starting the NIM.

  3. Download and install the NGC CLI by following the NGC Setup steps, which also cover configuring the NGC CLI and Docker client appropriately.

  4. To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry using the command below.

(Note: To run this notebook in a virtual environment, launch the NIM Docker container in the background outside of the notebook environment before running the LangChain code in the notebook cells. Create a virtual environment and install the dependencies listed in notebooks/requirements.txt with pip install -r notebooks/requirements.txt. Run the commands in the first three cells from a terminal, then continue with the fourth cell (the curl inference command) inside the notebook environment.)

!echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

Set up location for caching the model artifacts

!export LOCAL_NIM_CACHE=~/.cache/nim
!mkdir -p "$LOCAL_NIM_CACHE"
!chmod 777 "$LOCAL_NIM_CACHE"

Launch the NIM microservice

!docker run -d --name meta-llama3-8b-instruct --gpus all -e NGC_API_KEY -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" -u $(id -u) -p 8000:8000 nvcr.io/nim/meta/llama3-8b-instruct:1.0.0
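
The container can take a few minutes to download the model and become ready. As a rough readiness check, you can poll the health endpoint from Python before sending requests. This is a minimal sketch, assuming the NIM exposes GET /v1/health/ready on port 8000, as recent LLM NIMs do.

import time

import requests

# Poll the NIM readiness endpoint until the model server reports ready.
# Assumption: the container exposes GET /v1/health/ready on port 8000.
for _ in range(60):
    try:
        if requests.get("http://0.0.0.0:8000/v1/health/ready", timeout=5).status_code == 200:
            print("NIM is ready")
            break
    except requests.exceptions.ConnectionError:
        pass
    time.sleep(10)
else:
    print("NIM did not become ready in time; check the container logs")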

Before we continue and connect the NIM to LangChain, let’s test it with a simple OpenAI-compatible completions request

!curl -X 'POST' \
    "http://0.0.0.0:8000/v1/completions" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d '{"model": "meta/llama3-8b-instruct", "prompt": "Once upon a time", "max_tokens": 64}'
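
Because the NIM exposes an OpenAI-compatible API, the same request can also be made from Python with the openai client. This is a sketch assuming the openai package (version 1.x) is installed; the api_key value is a placeholder, since a locally hosted NIM does not validate it.

from openai import OpenAI

# Point the OpenAI client at the locally hosted NIM.
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="not-used")

completion = client.completions.create(
    model="meta/llama3-8b-instruct",
    prompt="Once upon a time",
    max_tokens=64,
)
print(completion.choices[0].text)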

Now set up the LangChain flow by installing the prerequisite libraries

!pip install langchain
!pip install langchain_nvidia_ai_endpoints
!pip install faiss-cpu

Set up your NVIDIA API key, which you can get from the API Catalog; it is needed for the hosted embedding model used later in the notebook

import getpass
import os

if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvapi_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key

We can now connect to the locally deployed NIM from LangChain by specifying the base URL

from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(base_url="http://0.0.0.0:8000/v1", model="meta/llama3-8b-instruct", temperature=0.1, max_tokens=1000, top_p=1.0)

result = llm.invoke("What is the capital of France?")
print(result.content)

Next, import the LangChain components used for the RAG pipeline

import os
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

Helper function for loading HTML files. We’ll use it later to load the relevant HTML documents from the Triton documentation website and convert them to a vector store.

import re
from typing import List, Union

import requests
from bs4 import BeautifulSoup

def html_document_loader(url: Union[str, bytes]) -> str:
    """
    Loads the HTML content of a document from a given URL and returns its plain-text content.

    Args:
        url: The URL of the document.

    Returns:
        The content of the document.

    Raises:
        Exception: If there is an error while making the HTTP request.

    """
    try:
        response = requests.get(url)
        html_content = response.text
    except Exception as e:
        print(f"Failed to load {url} due to exception {e}")
        return ""

    try:
        # Create a Beautiful Soup object to parse html
        soup = BeautifulSoup(html_content, "html.parser")

        # Remove script and style tags
        for script in soup(["script", "style"]):
            script.extract()

        # Get the plain text from the HTML document
        text = soup.get_text()

        # Remove excess whitespace and newlines
        text = re.sub(r"\s+", " ", text).strip()

        return text
    except Exception as e:
        print(f"Exception {e} while loading document")
        return ""

Read the HTML files and split the text in preparation for embedding generation. Note that the chunk_size value must be compatible with the specific model used for embedding generation.

Pay attention to the chunk_size parameter in the text splitter. Setting the right chunk size is critical for RAG performance, since much of a RAG’s success depends on the retrieval step finding the right context for generation. The entire prompt (retrieved chunks + user query) must fit within the LLM’s context window, so avoid chunk sizes that are too large and budget for the expected query size. For example, while OpenAI LLMs have context windows of 8k-32k tokens, Llama3 is limited to 8k tokens. Experiment with different chunk sizes; typical values are 100-600, depending on the LLM.
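
As a rough way to sanity-check a chunk size before building the index, you can budget the prompt with a simple characters-per-token heuristic (roughly 4 characters per token for English text). The numbers below are illustrative assumptions, not measurements from a tokenizer.

# Rough prompt budget check; all values are assumptions for illustration.
CONTEXT_WINDOW_TOKENS = 8192   # Llama3-8b-instruct context window
CHARS_PER_TOKEN = 4            # crude approximation for English text

chunk_size_chars = 1000        # chunk_size used by the splitter below (characters, since length_function=len)
top_k = 4                      # default number of chunks returned by the FAISS retriever
query_tokens = 200             # generous allowance for the question and prompt template
max_new_tokens = 1000          # matches max_tokens passed to ChatNVIDIA

prompt_tokens = top_k * (chunk_size_chars // CHARS_PER_TOKEN) + query_tokens
assert prompt_tokens + max_new_tokens < CONTEXT_WINDOW_TOKENS, "Reduce chunk_size or top_k"
print(f"Estimated prompt tokens: {prompt_tokens}")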

def create_embeddings(embedding_path: str = "./embed"):

    print(f"Storing embeddings to {embedding_path}")

    # List of web pages containing NVIDIA Triton technical documentation
    urls = [
         "https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html",
         "https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/getting_started/quickstart.html",
         "https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_repository.html",
         "https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/model_analyzer.html",
         "https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/architecture.html",
    ]

    documents = []
    for url in urls:
        document = html_document_loader(url)
        documents.append(document)


    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=0,
        length_function=len,
    )
    texts = text_splitter.create_documents(documents)
    index_docs(url, text_splitter, texts, embedding_path)
    print("Generated embedding successfully")

Generate embeddings using the NVIDIA Retrieval QA Embedding NIM and NVIDIA AI Endpoints for LangChain, and save the embeddings to an offline vector store in the ./embed directory for future re-use

def index_docs(url: Union[str, bytes], splitter, documents: List[str], dest_embed_dir) -> None:
    """
    Split the document into chunks and create embeddings for the document

    Args:
        url: Source URL for the document.
        splitter: Splitter used to split the document.
        documents: List of documents whose embeddings need to be created.
        dest_embed_dir: Destination directory for the embeddings.

    Returns:
        None
    """
    embeddings = NVIDIAEmbeddings(model="ai-embed-qa-4", truncate="END")

    for document in documents:
        texts = splitter.split_text(document.page_content)

        # attach the document's metadata to each chunk
        metadatas = [document.metadata] * len(texts)

        # create embeddings and add to vector store
        if os.path.exists(dest_embed_dir):
            update = FAISS.load_local(folder_path=dest_embed_dir, embeddings=embeddings)
            update.add_texts(texts, metadatas=metadatas)
            update.save_local(folder_path=dest_embed_dir)
        else:
            docsearch = FAISS.from_texts(texts, embedding=embeddings, metadatas=metadatas)
            docsearch.save_local(folder_path=dest_embed_dir)

The second stage is to load the embeddings from the vector store and build a RAG using NVIDIAEmbeddings

Create the embedding model using the NVIDIA Retrieval QA Embedding NIM from the API Catalog. This model represents words, phrases, or other entities as vectors of numbers and understands the relationships between words and phrases. See here for reference: https://build.nvidia.com/nvidia/embed-qa-4

create_embeddings()

embedding_model = NVIDIAEmbeddings(model="ai-embed-qa-4", truncate="END")
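
As a quick sanity check of the embedding model, embed a sample query and inspect the vector dimensionality (the exact size depends on the model; this is just an inspection, not a required step).

sample_vector = embedding_model.embed_query("What is Triton Inference Server?")
print(f"Embedding dimension: {len(sample_vector)}")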

Load documents from vector database using FAISS

# Load the previously generated embeddings from the local vector store
embedding_path = "embed/"
docsearch = FAISS.load_local(folder_path=embedding_path, embeddings=embedding_model)
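
Depending on your installed langchain version, FAISS.load_local may refuse to deserialize the pickled index unless you opt in explicitly. If you hit that error, pass the flag added in newer langchain-community releases (this is an assumption about your version; older releases do not accept the parameter).

docsearch = FAISS.load_local(
    folder_path=embedding_path,
    embeddings=embedding_model,
    allow_dangerous_deserialization=True,  # only needed on newer langchain-community versions
)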

Create a ConversationalRetrievalChain using the local NIM. We’ll use the Llama3 8B NIM we deployed locally, add memory for chat history, and connect to the vector store via the embedding model. See here for reference: https://python.langchain.com/docs/modules/chains/popular/chat_vector_db#conversationalretrievalchain-with-streaming-to-stdout

llm = ChatNVIDIA(base_url="http://0.0.0.0:8000/v1", model="meta/llama3-8b-instruct", temperature=0.1, max_tokens=1000, top_p=1.0)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa_prompt = QA_PROMPT

doc_chain = load_qa_chain(llm, chain_type="stuff", prompt=QA_PROMPT)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=docsearch.as_retriever(),
    chain_type="stuff",
    memory=memory,
    combine_docs_chain_kwargs={'prompt': qa_prompt},
)

Now try asking a question about Triton using the chain

query = "What is Triton?"
result = qa({"question": query})
print(result.get("answer"))
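
If you also want to see which documentation chunks the retriever supplied for an answer, you can build a variant of the chain that returns its source documents. This is a sketch assuming the LangChain 0.1-era API; with memory attached, the memory also needs to know which output key to store.

memory_with_sources = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="answer"
)
qa_with_sources = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=docsearch.as_retriever(),
    chain_type="stuff",
    memory=memory_with_sources,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)

result = qa_with_sources({"question": "What is Triton?"})
for doc in result["source_documents"]:
    print(doc.page_content[:200])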

Ask another question about Triton

query = "Does Triton support ONNX?"
result = qa({"question": query})
print(result.get("answer"))

Finally, showcase the chat capabilities by asking a follow-up question about the previous query

query = "But why?"
result = qa({"question": query})
print(result.get("answer"))
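
To see the conversation history the chain conditions on, you can inspect the messages accumulated in the ConversationBufferMemory created above.

for message in memory.chat_memory.messages:
    print(f"{message.type}: {message.content[:120]}")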