imartinez
f469b4619d
Add required Ollama setting
2024-04-02 18:27:57 +02:00
Robin Boone
b3b0140e24
feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800)
2024-04-02 16:23:10 +02:00
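For context on what #1800 changes: it decouples the Ollama LLM from the Ollama embedding component (so each can point at its own server) and exposes a longer keep_alive so models stay loaded between requests. A minimal sketch of how such settings could be modelled in settings.py follows; the field names and defaults here are assumptions for illustration, not the exact upstream code.

```python
from pydantic import BaseModel, Field


class OllamaSettings(BaseModel):
    # Assumed field names: separate base URLs so the LLM and the embedding
    # model can be served by different Ollama instances.
    api_base: str = Field(
        "http://localhost:11434",
        description="Ollama server used for the LLM.",
    )
    embedding_api_base: str = Field(
        "http://localhost:11434",
        description="Ollama server used for embeddings.",
    )
    # Assumed default: how long Ollama keeps the model loaded after a
    # request (e.g. "5m", "1h", or "-1" to keep it loaded indefinitely).
    keep_alive: str = Field(
        "5m",
        description="Time the model stays loaded in memory after a request.",
    )
```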
Iván Martínez
6f6c785dac
feat(llm): Ollama timeout setting (#1773)
* Added request_timeout to Ollama, with the default set to 30.0 in settings.yaml and settings-ollama.yaml (see the sketch after this entry)
* Update settings-ollama.yaml
* Update settings.yaml
* Updated settings.py and tidied up settings-ollama.yaml
* feat(UI): Faster startup and document listing (#1763)
* fix(ingest): update script label (#1770)
huggingface -> Hugging Face
* Fix lint errors
---------
Co-authored-by: Stephen Gresham <steve@gresham.id.au>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2024-03-20 21:33:46 +01:00
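The body above says the new request_timeout defaults to 30.0 seconds and is surfaced in settings.yaml and settings-ollama.yaml. A hedged sketch of what the corresponding settings.py field might look like; only the field name and the 30.0 default come from the commit, the rest is illustrative.

```python
from pydantic import BaseModel, Field


class OllamaSettings(BaseModel):
    # request_timeout and its 30.0-second default come from the commit body;
    # the surrounding class and description are assumptions.
    request_timeout: float = Field(
        30.0,
        description="Seconds to wait for an Ollama response before timing out.",
    )
```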
icsy7867
02dc83e8e9
feat(llm): adds several settings for llamacpp and ollama (#1703)
2024-03-11 22:51:05 +01:00
Iván Martínez
45f05711eb
feat: Upgrade LlamaIndex to 0.10 (#1663)
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
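One of the bullets above is "Support Ollama embeddings". With LlamaIndex 0.10's namespaced packages, wiring that up might look roughly like the following; the model name and server URL are placeholders, and the import path assumes the llama-index-embeddings-ollama integration package is installed.

```python
# Assumes the llama-index-embeddings-ollama integration package is installed.
from llama_index.embeddings.ollama import OllamaEmbedding

# Placeholder model name and server URL; substitute whatever is configured
# in settings-ollama.yaml.
embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)

# Embed a single piece of text to confirm the server is reachable.
vector = embed_model.get_text_embedding("PrivateGPT supports Ollama embeddings.")
print(len(vector))
```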