Commit Graph

245 Commits

Author SHA1 Message Date
Arun Yadav 821bca32e9
feat(local): tiktoken cache within repo for offline (#1467) 2024-03-11 22:55:13 +01:00
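A minimal sketch of the offline idea behind this commit, assuming tiktoken's TIKTOKEN_CACHE_DIR environment variable and a cache folder checked into the repo (the exact path used by #1467 is an assumption):

```python
import os
from pathlib import Path

# Assumption: a "tiktoken_cache" folder kept inside the repo.
# TIKTOKEN_CACHE_DIR must be set before tiktoken loads a BPE file.
os.environ["TIKTOKEN_CACHE_DIR"] = str(Path(__file__).parent / "tiktoken_cache")

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # resolved from the local cache when present
print(enc.encode("offline tokenization"))
```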
icsy7867 02dc83e8e9
feat(llm): adds several settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
Hoffelhas 410bf7a71f
feat(ui): maintain score order when curating sources (#1643)
* Update ui.py

Changed how 'curated_sources' is built, in order to maintain score order when returning the curated sources.

* Maintain score order after curating sources
2024-03-11 22:27:30 +01:00
icsy7867 290b9fb084
feat(ui): add sources check to not repeat identical sources (#1705) 2024-03-11 22:24:18 +01:00
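Both source-curation fixes above (#1643, #1705) post-process retrieved sources before display. A hedged sketch of the combined idea; the Source type and its fields are illustrative, not the actual ui.py objects:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Source:  # illustrative stand-in for the UI's source objects
    file: str
    page: int
    score: float

def curate_sources(sources: list[Source]) -> list[Source]:
    """Drop repeated sources while preserving descending score order."""
    seen: set[tuple[str, int]] = set()
    curated: list[Source] = []
    for src in sorted(sources, key=lambda s: s.score, reverse=True):
        key = (src.file, src.page)
        if key not in seen:  # skip identical sources
            seen.add(key)
            curated.append(src)
    return curated
```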
github-actions[bot] 1b03b369c0
chore(main): release 0.4.0 (#1628)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-06 17:53:35 +01:00
Iván Martínez 45f05711eb
feat: Upgrade to LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
Daniel Gallego Vico 12f3a39e8a
Update x handle to zylon private gpt (#1644) 2024-02-23 15:51:35 +01:00
TQ cd40e3982b
feat(Vector): support pgvector (#1624) 2024-02-20 15:29:26 +01:00
github-actions[bot] 066ea5bf28
chore(main): release 0.3.0 (#1413)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-16 17:42:39 +01:00
Iván Martínez aa13afde07
feat(UI): Select file to Query or Delete + Delete ALL (#1612)
---------

Co-authored-by: Robin Boone <rboone@sofics.com>
2024-02-16 17:36:09 +01:00
icsy7867 24fb80ca38
fix(UI): Updated ui.py to free the CPU from being bottlenecked.
Updated ui.py to include a small sleep timer while building the stream deltas. This recursive function fires so quickly that it eats up too much of the CPU; the small sleep keeps the CPU from being bottlenecked. The value can go lower, but 0.02 or 0.025 seconds seems to work well. (#1589)

Co-authored-by: root <root@wesgitlabdemo.icl.gtri.org>
2024-02-16 12:52:14 +01:00
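The shape of that fix, sketched (the actual ui.py loop differs; the point is the short sleep inside the delta-building loop):

```python
import time
from collections.abc import Iterable, Iterator

def build_stream_deltas(token_stream: Iterable[str]) -> Iterator[str]:
    """Accumulate streamed tokens into growing deltas for the UI."""
    full_response = ""
    for token in token_stream:
        full_response += token
        time.sleep(0.02)  # brief pause so the tight loop doesn't peg a CPU core
        yield full_response
```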
Ygal Blum 6bbec79583
feat(llm): Add support for Ollama LLM (#1526) 2024-02-09 15:50:50 +01:00
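A minimal sketch of llama-index's Ollama wrapper from that era (pre-0.10 import path; model name and URL are placeholders):

```python
from llama_index.llms import Ollama  # llama-index pre-0.10 import path

# Placeholders: any model pulled into a locally running Ollama server.
llm = Ollama(model="mistral", base_url="http://localhost:11434")
print(llm.complete("Why is the sky blue?"))
```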
Nick Smirnov b178b51451
feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) 2024-02-07 19:59:32 +01:00
Iván Martínez 24fae660e6
feat: Add stream information to generate SDKs (#1569) 2024-02-02 16:14:22 +01:00
Pablo Orgaz 3e67e21d38
Add embedding mode config (#1541) 2024-01-25 10:55:32 +01:00
Naveen Kannan 869233f0e4
fix: Adding an LLM param to fix broken generator from llamacpp (#1519) 2024-01-17 18:10:45 +01:00
CognitiveTech e326126d0d
feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
Robert Gay 6191bcdbd6
fix: minor bug in chat stream output - python error being serialized (#1449) 2024-01-16 16:41:20 +01:00
Iván Martínez d3acd85fe3
fix(tests): load the test settings only when running tests
The previous implementation caused false positives with the latest version of LlamaIndex.
2024-01-09 12:03:16 +01:00
Guido Schulz 0a89d76cc5
fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 2023-12-26 13:09:31 +01:00
Matthew Hill 2d27a9f956
feat(llm): Add openailike llm mode (#1447)
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI-compatible API.

Implements #1424
2023-12-26 10:26:08 +01:00
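Since the mode only relies on the OpenAI wire format, any OpenAI client can talk to such a server. A hedged sketch (base URL, key, and model name are placeholders for whatever OpenAI-compatible server you run):

```python
from openai import OpenAI  # openai>=1.0 client

# Placeholders: point at any server speaking the OpenAI API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-local-model",  # a name the server recognizes, not an OpenAI model
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```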
imartinez fee9f08ef3
Move back to 3900 for the context window to avoid melting local machines 2023-12-22 18:21:43 +01:00
Iván Martínez fde2b942bc
fix(deploy): fix local and external dockerfiles 2023-12-22 14:16:46 +01:00
Iván Martínez 4c69c458ab
Improve ingest logs (#1438) 2023-12-21 17:13:46 +01:00
Iván Martínez 4780540870
feat(settings): Configurable context_window and tokenizer (#1437) 2023-12-21 14:49:35 +01:00
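The mechanism behind a configurable tokenizer, sketched: llama-index lets you swap its global token counter for the model's own tokenizer (the model name here is illustrative):

```python
from llama_index import set_global_tokenizer  # llama-index pre-0.10 import path
from transformers import AutoTokenizer

# Illustrative: count tokens with the actual model's tokenizer
# instead of the default tiktoken-based one.
set_global_tokenizer(
    AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1").encode
)
```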
Iván Martínez 6eeb95ec7f
feat(API): Ingest plain text (#1417)
* Add ingest/text route to ingest plain text

* Add new ingest text test and adapt ingest/file ones

* Include new API in docs

* Remove duplicated logic
2023-12-18 21:47:05 +01:00
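A sketch of calling the new route; the JSON field names and the port are assumptions based on the route's purpose, not taken from the commit:

```python
import requests

# Assumed request shape for the ingest/text route added in #1417.
resp = requests.post(
    "http://localhost:8001/v1/ingest/text",
    json={"file_name": "notes.txt", "text": "Plain text to index."},
)
resp.raise_for_status()
print(resp.json())
```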
Pablo Orgaz 059f35840a
fix(docker): docker broken copy (#1419) 2023-12-18 16:55:18 +01:00
Iván Martínez 8ec7cf49f4
feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)
* Update LlamaCPP dependency

* Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF

* Fix API docs
2023-12-17 16:11:08 +01:00
Rohit Das c71ae7cee9
feat(ui): make chat area stretch to fill the screen (#1397) 2023-12-17 12:02:13 +01:00
cognitivetech 2564f8d2bb
fix(settings): correct yaml multiline string (#1403) 2023-12-16 19:02:46 +01:00
Eliott Bouhana 4e496e970a
docs: remove misleading comment about pgpt working with python 3.12 (#1394)
I was misled into believing I could install using Python 3.12, whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Federico Grandi 3582764801
ci: fix preview docs checkout ref (#1393) 2023-12-12 20:33:34 +01:00
Federico Grandi 1d28ae2915
docs: fix minor capitalization typo (#1392) 2023-12-12 20:31:38 +01:00
github-actions[bot] e8ac51bba4
chore(main): release 0.2.0 (#1387)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-10 20:08:12 +01:00
3ly-13 145f3ec9f4
feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 2023-12-10 19:45:14 +01:00
3ly-13 a072a40a7c
Allow setting OpenAI model in settings (#1386)
feat(settings): Allow setting the OpenAI model to be used. Defaults to GPT-3.5
2023-12-09 20:13:00 +01:00
Louis Melchior a3ed14c58f
feat(llm): drop default_system_prompt (#1385)
As discussed on Discord, the decision has been made to remove the system prompts by default, to better separate API and UI usage.

A concurrent PR (#1353) enables the dynamic setting of a system prompt in the UI.

Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
If API users want to use a custom prompt, they can pass it directly in the messages they send to the API.

In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
2023-12-08 23:13:51 +01:00
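With the default gone, API users pass their own system prompt per request. A sketch, assuming an OpenAI-style chat route on privateGPT's default port (both the path and the port are assumptions here):

```python
import requests

# Assumption: an OpenAI-style chat endpoint on the default port.
resp = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a terse assistant."},  # per-request prompt
            {"role": "user", "content": "Summarize the release notes."},
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```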
Iván Martínez f235c50be9
Delete old docs (#1384) 2023-12-08 22:39:23 +01:00
EEmlan 9302620eac
Adding a German-speaking model to the documentation (#1374) 2023-12-08 11:26:25 +01:00
Max Zangs 9cf972563e
Add setup option to Makefile (#1368) 2023-12-08 10:34:12 +01:00
github-actions[bot] 3d301d0c6f
chore(main): release 0.1.0 (#1094)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-01 14:45:54 +01:00
lopagela 56af625d71
Fix the parallel ingestion mode, and make it available through conf (#1336)
* Fix the parallel ingestion mode, and make it available through conf

Also updated the documentation to show how to configure the ingest mode.

* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
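A rough sketch of what a parallel ingest mode amounts to, using only the standard library (worker count and function names are illustrative, not privateGPT's actual ingest service):

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def ingest_one(path: Path) -> str:
    """Illustrative per-file work: parse, embed, and index one document."""
    return f"ingested {path.name}"

def ingest_parallel(paths: list[Path], workers: int = 4) -> list[str]:
    # Files are processed concurrently instead of one by one.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ingest_one, paths))
```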
Francisco García Sierra b7ca7d35a0
Update ingest api docs with Windows support (#1289) 2023-11-29 20:56:37 +01:00
ishaandatta 28d03fdda8
Adding working combination of LLM and Embedding Model to recipes (#1315)
Co-authored-by: ishaandatta <ishaandatta50@gmail.com>
2023-11-29 20:54:22 +01:00
Phi Long aabdb046ae
Add docker compose (#1277)
Co-authored-by: philongn <philongn@theugroup.co>
Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-11-29 16:46:40 +01:00
Iván Martínez 64ed9cd872
Allow passing a system prompt (#1318) 2023-11-29 15:51:19 +01:00
Gianni Acquisto 9c192ddd73
Added max_new_tokens as a config option to llm yaml block (#1317)
* added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed location of docs comment

* reformatting from running 'make check'

* remove default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
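For context, llama-index's LlamaCPP wrapper (pre-0.10 import path) accepts a max_new_tokens argument that caps generation length; a sketch with illustrative values:

```python
from llama_index.llms import LlamaCPP  # llama-index pre-0.10 import path

llm = LlamaCPP(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # illustrative path
    max_new_tokens=256,   # the configurable cap discussed in #1317
    context_window=3900,
)
```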
Gianni Acquisto baf29f06fa
Adding docs about embeddings settings + adding the embedding.mode: local in mock profile (#1316) 2023-11-26 17:32:11 +01:00
lopagela bafdd3baf1
Ingestion speedup: multiple strategies (#1309) 2023-11-25 20:12:09 +01:00
Iván Martínez 546ba33e6f
Update readme with supporters info (#1311) 2023-11-25 18:35:59 +01:00