Update README.md

parent 24e464f51b
commit 9c3832c156

README.md: 26 changes
@@ -27,14 +27,6 @@ MODEL_N_CTX: Maximum token limit for both embeddings and LLM models

Note: because of the way `langchain` loads the `LlamaCpp` embeddings, you need to specify the absolute path of your embeddings model binary. This means it will not work if you use a home-directory shortcut (e.g. `~/` or `$HOME/`).
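As a minimal sketch of working around this, Python's standard library can expand a home-directory shortcut into the absolute path that `langchain` requires (the model filename below is a hypothetical placeholder; substitute your actual model location):

```python
import os

# Hypothetical example path; replace with your actual model binary.
model_path = "~/models/ggml-model-q4_0.bin"

# langchain will not expand "~" itself, so resolve it to an
# absolute path before putting it in your configuration.
absolute_path = os.path.abspath(os.path.expanduser(model_path))
print(absolute_path)
```

`os.path.expanduser` handles the `~/` shortcut and `os.path.abspath` normalizes the result, so the printed path is safe to use wherever an absolute path is required.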
## Setup C++ Compiler

If you encounter an error with `pip install`, you might need to install a C++ compiler on your computer.

### For Windows 10/11

Install Visual Studio 2022 along with the necessary components: Universal Windows Platform development, C++ CMake tools for Windows.

Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).

Run the installer and select the "gcc" component.
## Test dataset

This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example.
@@ -80,5 +72,23 @@ Selecting the right local models and the power of `LangChain` you can run the en

- `privateGPT.py` uses a local LLM based on `GPT4All-J` or `LlamaCpp` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
- `GPT4All-J` wrapper was introduced in LangChain 0.0.162.
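The similarity-search step described above can be illustrated with a toy sketch. The hand-made three-dimensional vectors stand in for real embeddings, and the store contents are invented for illustration; this is not privateGPT's actual code:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "vector store": (embedding, text chunk) pairs.
# In the real pipeline, embeddings come from the embeddings model.
store = [
    ([1.0, 0.0, 0.0], "chunk about taxes"),
    ([0.0, 1.0, 0.0], "chunk about healthcare"),
    ([0.7, 0.7, 0.0], "chunk about the economy"),
]

def most_similar(query_embedding, store):
    """Return the stored chunk whose embedding is closest to the query."""
    return max(store, key=lambda item: cosine_similarity(query_embedding, item[0]))[1]

print(most_similar([0.9, 0.1, 0.0], store))  # → "chunk about taxes"
```

The retrieved chunk is then handed to the local LLM as context, which is what lets the answer stay grounded in your documents rather than the model's training data.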
# System Requirements

## Python Version

To use this software, you must have Python 3.10 or later installed. The code will not run on earlier versions of Python.
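A start-up guard like the following sketch makes the requirement explicit (the `meets_minimum` helper is illustrative, not part of this repo):

```python
import sys

def meets_minimum(version_info, minimum=(3, 10)):
    # Version tuples compare element-wise, so (3, 9) < (3, 10)
    # even though "3.9" > "3.10" as strings.
    return tuple(version_info[:2]) >= minimum

print(meets_minimum((3, 9)))   # → False
print(meets_minimum((3, 11)))  # → True
print(meets_minimum(sys.version_info))
```

Comparing `sys.version_info` as a tuple avoids the classic string-comparison pitfall where `"3.9"` sorts after `"3.10"`.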
## C++ Compiler

If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer.

### For Windows 10/11

To install a C++ compiler on Windows 10/11, follow these steps:

1. Install Visual Studio 2022.
2. Make sure the following components are selected:
   * Universal Windows Platform development
   * C++ CMake tools for Windows
3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).
4. Run the installer and select the "gcc" component.
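After installing, you can confirm that a compiler is actually reachable before retrying `pip install`. A small sketch using Python's standard library (the candidate names are common defaults for MSVC, MinGW, and Clang; adjust for your toolchain):

```python
import shutil

def find_cpp_compiler(candidates=("cl", "g++", "clang++", "gcc")):
    """Return the path of the first compiler found on PATH, or None."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

compiler = find_cpp_compiler()
print(compiler or "no C++ compiler found on PATH")
```

If this prints `no C++ compiler found on PATH`, the build tools are either not installed or their `bin` directory has not been added to your `PATH` environment variable.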
# Disclaimer

This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and vector embeddings. It is not production-ready, and it is not meant to be used in production. The model selection is optimized for privacy rather than performance, but it is possible to use different models and vector stores to improve performance.