Use a different text splitter to improve results. `ingest.py` now takes an argument pointing to the document to ingest.

This commit is contained in:
Iván Martínez 2023-05-05 17:32:31 +02:00
parent a05186b598
commit 92244a90b4
2 changed files with 6 additions and 6 deletions


@@ -20,13 +20,12 @@ This repo uses a [state of the union transcript](https://github.com/imartinez/pr
 ## Instructions for ingesting your own dataset
-Place your .txt file in `source_documents` folder.
-Edit `ingest.py` loader to point it to your document.
+Get your .txt file ready.
 Run the following command to ingest the data.
 ```shell
-python ingest.py
+python ingest.py <path_to_your_txt_file>
 ```
 It will create a `db` folder containing the local vectorstore. Will take time, depending on the size of your document.


@@ -1,13 +1,14 @@
 from langchain.document_loaders import TextLoader
-from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.text_splitter import CharacterTextSplitter
 from langchain.vectorstores import Chroma
 from langchain.embeddings import LlamaCppEmbeddings
+from sys import argv
 def main():
     # Load document and split in chunks
-    loader = TextLoader('./source_documents/state_of_the_union.txt', encoding='utf8')
+    loader = TextLoader(argv[1], encoding="utf8")
     documents = loader.load()
-    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
+    text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
     texts = text_splitter.split_documents(documents)
     # Create embeddings
     llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
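The effect of the new splitter parameters (`chunk_size=500`, `chunk_overlap=50`) can be illustrated with a plain-Python sketch. This is a simplification: LangChain's `CharacterTextSplitter` splits on a separator before sizing chunks, whereas the sliding-window chunker below only demonstrates the size/overlap idea.

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Naive fixed-window chunker with overlap (illustration only, not
    the actual CharacterTextSplitter algorithm)."""
    step = chunk_size - chunk_overlap  # advance 450 chars per chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 1200-char document yields 3 chunks; consecutive chunks share 50 chars.
chunks = chunk_text("".join(str(i % 10) for i in range(1200)))
```

Overlap means the tail of one chunk is repeated at the head of the next, which helps preserve context that would otherwise be cut at a chunk boundary.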