private-gpt/private_gpt
Iván Martínez 6f6c785dac
feat(llm): Ollama timeout setting (#1773)
* added request_timeout to ollama, default set to 30.0 in settings.yaml and settings-ollama.yaml

* Update settings-ollama.yaml

* Update settings.yaml

* updated settings.py and tidied up settings-ollama.yaml

* feat(UI): Faster startup and document listing (#1763)

* fix(ingest): update script label (#1770)

huggingface -> Hugging Face

* Fix lint errors

---------

Co-authored-by: Stephen Gresham <steve@gresham.id.au>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2024-03-20 21:33:46 +01:00
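
Based on the commit description above, the new Ollama timeout setting likely lands in settings-ollama.yaml roughly as sketched below. Only `request_timeout` (default 30.0) is stated in the commit; the surrounding keys are illustrative assumptions, not taken from this change:

```yaml
# settings-ollama.yaml — sketch, not the exact file contents.
# Only request_timeout is described by commit #1773; other keys are assumed.
ollama:
  llm_model: llama2                  # assumed example model name
  api_base: http://localhost:11434   # assumed default Ollama endpoint
  request_timeout: 30.0              # seconds; default per the commit message
```

A larger value would give slow local models more time to respond before the request is aborted.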
components feat(llm): Ollama timeout setting (#1773) 2024-03-20 21:33:46 +01:00
open_ai feat: Upgrade to LlamaIndex to 0.10 (#1663) 2024-03-06 17:51:30 +01:00
server feat(UI): Faster startup and document listing (#1763) 2024-03-20 19:11:44 +01:00
settings feat(llm): Ollama timeout setting (#1773) 2024-03-20 21:33:46 +01:00
ui feat(llm): adds several settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
utils feat(ingest): Created a faster ingestion mode - pipeline (#1750) 2024-03-19 21:24:46 +01:00
__init__.py feat(local): tiktoken cache within repo for offline (#1467) 2024-03-11 22:55:13 +01:00
__main__.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00
constants.py Next version of PrivateGPT (#1077) 2023-10-19 16:04:35 +02:00
di.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00
launcher.py feat(llm): adds several settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
main.py feat: Upgrade to LlamaIndex to 0.10 (#1663) 2024-03-06 17:51:30 +01:00
paths.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00