private-gpt/private_gpt
Gianni Acquisto 9c192ddd73
Added max_new_tokens as a config option to llm yaml block (#1317)
* added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed location of docs comment

* reformatting from running 'make check'

* remove default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
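The commit above introduces a `max_new_tokens` option in the `llm` block of PrivateGPT's YAML settings, with the default of 256 defined in `private_gpt/settings/settings.py` rather than in `settings.yaml` (the default config value was removed from `settings.yaml` in a later revision of the PR). A minimal sketch of what an overriding `llm` block might look like; the chosen value of 512 is illustrative, only the key name and the default of 256 come from the commit:

```yaml
llm:
  # Maximum number of tokens the model may generate per response.
  # If this key is omitted, the default of 256 declared in
  # private_gpt/settings/settings.py applies.
  max_new_tokens: 512
```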
components      Added max_new_tokens as a config option to llm yaml block (#1317)   2023-11-26 19:17:29 +01:00
open_ai         Update poetry lock (#1209)                                           2023-11-11 22:44:19 +01:00
server          Ingestion Speedup Multiple strategy (#1309)                          2023-11-25 20:12:09 +01:00
settings        Added max_new_tokens as a config option to llm yaml block (#1317)    2023-11-26 19:17:29 +01:00
ui              Ingestion Speedup Multiple strategy (#1309)                          2023-11-25 20:12:09 +01:00
utils           Next version of PrivateGPT (#1077)                                   2023-10-19 16:04:35 +02:00
__init__.py     feat: Disable Gradio Analytics (#1165)                               2023-11-06 14:31:26 +01:00
__main__.py     fix: Remove global state (#1216)                                     2023-11-12 22:20:36 +01:00
constants.py    Next version of PrivateGPT (#1077)                                   2023-10-19 16:04:35 +02:00
di.py           fix: Remove global state (#1216)                                     2023-11-12 22:20:36 +01:00
launcher.py     fix: Remove global state (#1216)                                     2023-11-12 22:20:36 +01:00
main.py         fix: Remove global state (#1216)                                     2023-11-12 22:20:36 +01:00
paths.py        fix: Remove global state (#1216)                                     2023-11-12 22:20:36 +01:00