private-gpt/private_gpt/components
Gianni Acquisto (commit 9c192ddd73)
Added max_new_tokens as a config option to llm yaml block (#1317)
* Added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed the location of the docs comment

* Reformatted code after running 'make check'

* Removed the default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
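
For readers unfamiliar with the change, the commit boils down to reading one extra integer, max_new_tokens, from the llm block of settings.yaml, falling back to 256 when the key is absent. The sketch below is a rough illustration only, assuming a pydantic-style settings model; the class name LLMSettings, the field description, and the YAML override shown in the comments are assumptions, not code taken from the commit.

```python
# Illustrative sketch only; assumes a pydantic-style settings model.
from pydantic import BaseModel, Field


class LLMSettings(BaseModel):  # class name assumed for illustration
    # Cap on how many tokens the LLM may generate per completion.
    # The 256 default comes from the commit message above.
    max_new_tokens: int = Field(
        256,
        description="Maximum number of tokens the LLM may generate in one completion.",
    )


# A hypothetical override in the llm block of settings.yaml:
#
#   llm:
#     max_new_tokens: 512
```

This matches the last two bullets above: the default value lives in settings.py, while settings.yaml only needs the key when overriding it.
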
Name         | Last commit                                                       | Date
embedding    | Ingestion Speedup Multiple strategy (#1309)                       | 2023-11-25 20:12:09 +01:00
ingest       | Ingestion Speedup Multiple strategy (#1309)                       | 2023-11-25 20:12:09 +01:00
llm          | Added max_new_tokens as a config option to llm yaml block (#1317) | 2023-11-26 19:17:29 +01:00
node_store   | Endpoint to delete documents ingested (#1163)                     | 2023-11-06 15:47:42 +01:00
vector_store | Make qdrant the default vector db (#1285)                         | 2023-11-20 16:19:22 +01:00
__init__.py  | Next version of PrivateGPT (#1077)                                | 2023-10-19 16:04:35 +02:00