## Local Installation steps
The steps in the `Installation and Settings` section are explained in more detail and cover more setup scenarios, but if you are looking for a quick setup guide, here it is:

```bash
# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT

# Install Python 3.11
pyenv install 3.11
pyenv local 3.11

# Install dependencies
poetry install --with ui,local

# Download Embedding and LLM models
poetry run python scripts/setup

# (Optional) For Mac with Metal GPU, enable it. Check the Installation and Settings
# section to learn how to enable GPU on other platforms
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python

# Run the local server
PGPT_PROFILES=local make run

# Note: on Mac with Metal you should see a ggml_metal_add_buffer log,
# stating the GPU is being used

# Navigate to the UI and try it out!
# http://localhost:8001/
```
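
Once the server is running, a quick way to confirm it is responding (beyond opening the UI in a browser) is to request the interactive API documentation that the server exposes. This is a minimal sketch assuming the default port from the sample settings and the standard FastAPI `/docs` path; adjust both if your profile differs.

```bash
# Sanity check: port 8001 and the /docs path are assumptions based on the
# default profile; change them if you customized the settings.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8001/docs
# A 200 response means the server is up and serving its API documentation.
```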
## API
As explained in the introduction, the API contains high-level APIs (ingestion and chat/completions) and low-level APIs (embeddings and chunk retrieval). This section explains each of the specific API calls.
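
For orientation before the per-endpoint details, here is a hedged sketch of what calling a high-level and a low-level endpoint can look like with `curl`. The exact paths (`/v1/chat/completions`, `/v1/chunks`) and payload fields shown here are assumptions based on the OpenAI-style layout of the API; the schema served by your running instance is the authoritative reference.

```bash
# High-level API (assumed path and payload): ask a question, letting the
# server use the ingested documents as context.
curl -s http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize the ingested docs"}], "use_context": true}'

# Low-level API (assumed path and payload): retrieve the most relevant
# chunks for a query without generating a completion.
curl -s http://localhost:8001/v1/chunks \
  -H "Content-Type: application/json" \
  -d '{"text": "topic to look up"}'
```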