Use a different text splitter to improve results. `ingest.py` now takes an argument pointing to the document to ingest.
parent a05186b598
commit 92244a90b4
@@ -20,13 +20,12 @@ This repo uses a [state of the union transcript](https://github.com/imartinez/pr
 
 ## Instructions for ingesting your own dataset
 
-Place your .txt file in `source_documents` folder.
-Edit `ingest.py` loader to point it to your document.
+Get your .txt file ready.
 
 Run the following command to ingest the data.
 
 ```shell
-python ingest.py
+python ingest.py <path_to_your_txt_file>
 ```
 
 It will create a `db` folder containing the local vectorstore. Will take time, depending on the size of your document.
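The ingest.py hunk that follows swaps RecursiveCharacterTextSplitter for CharacterTextSplitter with smaller, overlapping chunks. As a rough standalone sketch (the sample text here is fabricated and not part of the repo, and the chunk counts are illustrative only), the two configurations compare like this:

```python
# Illustrative comparison of the old and new splitter settings (not part of
# this commit). The sample text below is made up purely for the demo.
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter

sample = "\n\n".join(f"Paragraph {i}: " + "word " * 80 for i in range(10))

# Old configuration: recursive splitter, 1000-character chunks, no overlap.
old_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
# New configuration: character splitter, 500-character chunks, 50-character overlap.
new_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)

print("old chunk count:", len(old_splitter.split_text(sample)))
print("new chunk count:", len(new_splitter.split_text(sample)))
```

The smaller chunk size and nonzero overlap are the substance of the change; whether they actually improve retrieval will depend on the documents being ingested.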
@@ -1,13 +1,14 @@
 from langchain.document_loaders import TextLoader
-from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.text_splitter import CharacterTextSplitter
 from langchain.vectorstores import Chroma
 from langchain.embeddings import LlamaCppEmbeddings
+from sys import argv
 
 def main():
     # Load document and split in chunks
-    loader = TextLoader('./source_documents/state_of_the_union.txt', encoding='utf8')
+    loader = TextLoader(argv[1], encoding="utf8")
     documents = loader.load()
-    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
+    text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
     texts = text_splitter.split_documents(documents)
     # Create embeddings
     llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
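The hunk cuts off before the vectorstore is written. As an assumption inferred from the README's mention of a local `db` folder (this code is not part of the commit, and `build_vectorstore` is a hypothetical helper name), the remainder of `main()` presumably does something along these lines:

```python
# Sketch of the elided tail of ingest.py -- an assumption inferred from the
# README's `db` folder, not code taken from this commit.
from langchain.vectorstores import Chroma


def build_vectorstore(texts, llama):
    # Embed the chunks with the LlamaCpp embeddings and persist them under ./db,
    # which is the folder the README tells users to expect.
    db = Chroma.from_documents(texts, llama, persist_directory="db")
    db.persist()  # flush the Chroma index to disk
    return db
```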