private-gpt/private_gpt/components
icsy7867 e21bf20c10
feat: prompt_style applied to all LLMs + extra LLM params. (#1835)
* Moved prompt_style to the main LLM settings, since all LLMs from llama_index can utilize it. Also added temperature, context window size, max_tokens, and max_new_tokens to the openailike implementation to keep its settings consistent with the other implementations.

* Removed prompt_style from llamacpp entirely

* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.
2024-04-30 09:53:10 +02:00
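Based on the commit description, the resulting settings-local.yaml change might look roughly like the sketch below. The exact keys and values are assumptions inferred from the commit message, not verified against PrivateGPT's actual settings schema:

```yaml
# Sketch only: prompt_style now lives under the top-level llm section
# rather than under llamacpp (key names and values are illustrative).
llm:
  mode: llamacpp
  prompt_style: llama2      # moved here from the llamacpp section
  temperature: 0.1          # example value
  context_window: 3900      # example value
  max_new_tokens: 256       # example value

llamacpp:
  # prompt_style removed from this section entirely
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.2-GGUF   # illustrative
```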
Name           Latest commit                                                                      Date
embedding      feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800)     2024-04-02 16:23:10 +02:00
ingest         feat(ingest): Created a faster ingestion mode - pipeline (#1750)                   2024-03-19 21:24:46 +01:00
llm            feat: prompt_style applied to all LLMs + extra LLM params. (#1835)                 2024-04-30 09:53:10 +02:00
node_store     feat(nodestore): add Postgres for the doc and index store (#1706)                  2024-03-14 17:12:33 +01:00
vector_store   feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)    2024-03-15 09:55:17 +01:00
__init__.py    Next version of PrivateGPT (#1077)                                                 2023-10-19 16:04:35 +02:00