From 2dac62c5aa43ac0cb1994b83b5cfa29cdc37f2f3 Mon Sep 17 00:00:00 2001
From: Koushik
Date: Sun, 14 May 2023 10:26:13 +0530
Subject: [PATCH] fixed a typo

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 32a7693..94b779b 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,7 @@ MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
 MODEL_N_CTX: Maximum token limit for both embeddings and LLM models
 ```
 
-Note: because of the way `langchain` loads the `LLAMMA` embeddings, you need to specify the absolute path of your embeddings model binary. This means it will not work if you use a home directory shortcut (eg. `~/` or `$HOME/`).
+Note: because of the way `langchain` loads the `LLAMA` embeddings, you need to specify the absolute path of your embeddings model binary. This means it will not work if you use a home directory shortcut (eg. `~/` or `$HOME/`).
 
 ## Test dataset
 This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example.
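A minimal sketch (not part of the patch) illustrating why the note in the changed README line asks for an absolute path: Python does not expand `~` in path strings automatically, so a loader that opens the file path literally will not find the model binary. The model filename below is hypothetical.

```python
# Sketch only: demonstrates why a "~/" shortcut in MODEL_PATH fails.
# Python treats "~" as a literal character unless it is expanded explicitly.
import os

model_path = "~/models/ggml-embeddings.bin"  # hypothetical path using a home shortcut

print(os.path.exists(model_path))                       # False: "~" is taken literally
print(os.path.exists(os.path.expanduser(model_path)))   # True only if the file exists at the expanded path
print(os.path.abspath(os.path.expanduser(model_path)))  # the absolute path the README asks for
```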