c7212ac7cc  fix(LLM): mistral ignoring assistant messages (#1954)  [2024-05-30 15:41:16 +02:00]
    * fix: mistral ignoring assistant messages
    * fix: typing
    * fix: fix tests

3b3e96ad6c  Allow parameterizing OpenAI embeddings component (api_base, key, model) (#1920)  [2024-05-17 09:52:50 +02:00]
    * Allow parameterizing OpenAI embeddings component (api_base, key, model)
    * Update settings
    * Update description

45df99feb7  Add timeout parameter for better support of openailike LLM tools on a local computer (like LM Studio) (#1858)  [2024-05-10 16:44:08 +02:00]
    feat(llm): Improve settings of the OpenAILike LLM

966af4771d  fix(settings): enable CORS by default so it works when using the TS SDK (SPA) (#1925)  [2024-05-10 14:13:46 +02:00]

d13029a046  feat(docs): add privategpt-ts sdk (#1924)  [2024-05-10 14:13:15 +02:00]

9d0d614706  fix: Replacing unsafe `eval()` with `json.loads()` (#1890)  [2024-04-30 09:58:19 +02:00]
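The `eval()` to `json.loads()` swap above is the standard hardening move for parsing untrusted text. A minimal sketch of the difference; the payload and field names are hypothetical, not taken from the commit:

```python
import json

# Hypothetical untrusted input; the actual payload in the fix is not shown
# in the commit message.
untrusted = '{"model": "mistral", "top_k": 2}'

# json.loads() only parses data: valid JSON becomes plain dicts/lists/strings,
# and anything else raises json.JSONDecodeError.
data = json.loads(untrusted)
print(data["top_k"])  # 2

# eval() would instead execute the string as Python, so a payload such as
# "__import__('os').system(...)" could run arbitrary commands.
# json.loads() simply rejects it as invalid JSON.
try:
    json.loads("__import__('os').getcwd()")
except json.JSONDecodeError:
    print("rejected")
```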
				
					
						
							
							
								 
						
							
e21bf20c10  feat: prompt_style applied to all LLMs + extra LLM params (#1835)  [2024-04-30 09:53:10 +02:00]
    * Moved prompt_style to the main LLM settings, since all LLMs from llama_index can utilize it. Also added temperature, context window size, max_tokens, and max_new_tokens to openailike to keep its settings consistent with the other implementations.
    * Removed prompt_style from llamacpp entirely
    * Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp

c1802e7cf0  fix(docs): Update installation.mdx (#1866)  [2024-04-19 17:10:58 +02:00]
    Update repo url

2a432bf9c5  fix: make embedding_api_base match api_base when on docker (#1859)  [2024-04-19 15:42:19 +02:00]

947e737f30  fix: "no such group" error in Dockerfile; added docx2txt and cryptography deps (#1841)  [2024-04-19 15:40:00 +02:00]
    * Fixed "no such group" error in Dockerfile; added docx2txt to poetry so docx parsing works out of the box in docker containers
    * Added cryptography dependency for pdf parsing

49ef729abc  Allow passing HF access token to download tokenizer. Fallback to default tokenizer.  [2024-04-19 15:38:25 +02:00]

347be643f7  fix(llm): special tokens and leading space (#1831)  [2024-04-04 14:37:29 +02:00]

08c4ab175e  Fix version in poetry  [2024-04-03 10:59:35 +02:00]

f469b4619d  Add required Ollama setting  [2024-04-02 18:27:57 +02:00]
94ef38cbba  chore(main): release 0.5.0 (#1708)  [2024-04-02 17:45:15 +02:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

8a836e4651  feat(docs): Add guide for Llama-CPP Linux AMD GPU support (#1782)  [2024-04-02 16:55:05 +02:00]

f0b174c097  feat(ui): Add Model Information to ChatInterface label  [2024-04-02 16:52:27 +02:00]

bac818add5  feat(code): improve concat of strings in ui (#1785)  [2024-04-02 16:42:40 +02:00]

ea153fb92f  feat(scripts): Wipe qdrant and obtain db Stats command (#1783)  [2024-04-02 16:41:42 +02:00]

b3b0140e24  feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800)  [2024-04-02 16:23:10 +02:00]

83adc12a8e  feat(RAG): Introduce SentenceTransformer Reranker (#1810)  [2024-04-02 10:29:51 +02:00]

f83abff8bc  feat(docker): set default Docker to use Ollama (#1812)  [2024-04-01 13:08:48 +02:00]

087cb0b7b7  feat(rag): expose similarity_top_k and similarity_score to settings (#1771)  [2024-03-20 22:25:26 +01:00]
    * Added RAG settings to settings.py, vector_store and chat_service to add similarity_top_k and similarity_score
    * Updated settings in vector and chat service per Ivan's request
    * Updated code for mypy
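Based on the commit description, the newly exposed retrieval settings would sit in settings.yaml roughly as in the fragment below; the nesting, key names, and values are assumptions, not copied from the repo:

```yaml
rag:
  similarity_top_k: 2      # how many retrieved chunks are passed to the LLM
  similarity_score: 0.45   # minimum similarity a chunk needs to be kept
```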
				
					
						
							
							
								 
						
							
774e256052  fix: Fixed docker-compose (#1758)  [2024-03-20 21:36:45 +01:00]
    * Fixed docker-compose
    * Update docker-compose.yaml

6f6c785dac  feat(llm): Ollama timeout setting (#1773)  [2024-03-20 21:33:46 +01:00]
    * Added request_timeout to ollama, default set to 30.0 in settings.yaml and settings-ollama.yaml
    * Update settings-ollama.yaml
    * Update settings.yaml
    * Updated settings.py and tidied up settings-ollama.yaml
    * feat(UI): Faster startup and document listing (#1763)
    * fix(ingest): update script label (#1770): huggingface -> Hugging Face
    * Fix lint errors
    Co-authored-by: Stephen Gresham <steve@gresham.id.au>
    Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
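Per the bullets above, the new timeout would appear in settings-ollama.yaml roughly as follows; the exact nesting is an assumption based on the commit description:

```yaml
ollama:
  request_timeout: 30.0  # seconds to wait on a slow local model before timing out
```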
				
					
						
							
							
								 
						
							
c2d694852b  feat: wipe per storage type (#1772)  [2024-03-20 21:31:44 +01:00]

7d2de5c96f  fix(ingest): update script label (#1770)  [2024-03-20 20:23:08 +01:00]
    huggingface -> Hugging Face

348df781b5  feat(UI): Faster startup and document listing (#1763)  [2024-03-20 19:11:44 +01:00]

572518143a  feat(docs): Feature/upgrade docs (#1741)  [2024-03-19 21:26:53 +01:00]
    * Upgrade fern version
    * Add info about SDKs

134fc54d7d  feat(ingest): Created a faster ingestion mode - pipeline (#1750)  [2024-03-19 21:24:46 +01:00]
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres
    * postgresql should be postgres
    * Adding pipeline ingestion mode
    * Disable Hugging Face parallelism; continue on file-to-doc transform failure
    * Semaphore to limit docq async workers. ETA reporting

1efac6a3fe  feat(llm - embed): Add support for Azure OpenAI (#1698)  [2024-03-15 16:49:50 +01:00]
    * Add support for Azure OpenAI
    * fix: wrong default api_version; should be dashes instead of underscores (see: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference)
    * fix: code styling; applied "make check" changes
    * refactor: extend documentation
    * Mention azopenai as available option and extras
    * Add recommended section
    * Include settings-azopenai.yaml configuration file
    * fix: documentation

258d02d87c  fix(docs): Minor documentation amendment (#1739)  [2024-03-15 16:36:32 +01:00]
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres
    * postgresql should be postgres

63de7e4930  feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)  [2024-03-15 09:55:17 +01:00]
    * Unify pgvector and postgres connection settings
    * Remove local changes
    * Update file pgvector->postgres

68b3a34b03  feat(nodestore): add Postgres for the doc and index store (#1706)  [2024-03-14 17:12:33 +01:00]
    * Adding Postgres for the doc and index store
    * Adding documentation; rename postgres database local->simple; Postgres storage dependencies
    * Update documentation for postgres storage
    * Renaming feature to nodestore
    * Update docstore -> nodestore in doc
    * Missed some docstore changes in doc
    * Updated poetry.lock
    * Formatting updates to pass ruff/black checks
    * Correction to unreachable code
    * Format adjustment to pass black test
    * Adjust extra inclusion name for vector pg
    * Extra dep change for pg vector
    * storage-postgres -> storage-nodestore-postgres
    * Hash change on poetry lock
d17c34e81a  fix(settings): set default tokenizer to avoid `make setup` failing (#1709)  [2024-03-13 09:53:40 +01:00]

84ad16af80  feat(docs): upgrade fern (#1596)  [2024-03-11 23:02:56 +01:00]

821bca32e9  feat(local): tiktoken cache within repo for offline (#1467)  [2024-03-11 22:55:13 +01:00]

02dc83e8e9  feat(llm): adds several settings for llamacpp and ollama (#1703)  [2024-03-11 22:51:05 +01:00]

410bf7a71f  feat(ui): maintain score order when curating sources (#1643)  [2024-03-11 22:27:30 +01:00]
    * Update ui.py: changed 'curated_sources' from a list in order to maintain score order when returning the curated sources
    * Maintain score order after curating sources

290b9fb084  feat(ui): add sources check to not repeat identical sources (#1705)  [2024-03-11 22:24:18 +01:00]

1b03b369c0  chore(main): release 0.4.0 (#1628)  [2024-03-06 17:53:35 +01:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

45f05711eb  feat: Upgrade LlamaIndex to 0.10 (#1663)  [2024-03-06 17:51:30 +01:00]
    * Extract optional dependencies
    * Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
    * Support Ollama embeddings
    * Upgrade to llamaindex 0.10.14; remove legacy use of ServiceContext in ContextChatEngine
    * Fix vector retriever filters

12f3a39e8a  Update X handle to Zylon PrivateGPT (#1644)  [2024-02-23 15:51:35 +01:00]

cd40e3982b  feat(Vector): support pgvector (#1624)  [2024-02-20 15:29:26 +01:00]

066ea5bf28  chore(main): release 0.3.0 (#1413)  [2024-02-16 17:42:39 +01:00]
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

aa13afde07  feat(UI): Select file to Query or Delete + Delete ALL (#1612)  [2024-02-16 17:36:09 +01:00]
    Co-authored-by: Robin Boone <rboone@sofics.com>

24fb80ca38  fix(UI): Updated ui.py; frees the CPU from being bottlenecked  [2024-02-16 12:52:14 +01:00]
    Updated ui.py to include a small sleep timer while building the stream deltas. This recursive function fires so quickly that it eats up too much of the CPU; the small sleep frees the CPU from being bottlenecked. The value can go lower/shorter, but 0.02 or 0.025 seems to work well. (#1589)
    Co-authored-by: root <root@wesgitlabdemo.icl.gtri.org>
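The pattern this commit describes can be sketched as follows; the function name and delta source are hypothetical, and only the `time.sleep(0.02)` idea comes from the commit message:

```python
import time

def build_stream(deltas):
    """Accumulate streamed text deltas into a growing response string."""
    text = ""
    for delta in deltas:
        text += delta
        # Without this, the loop spins as fast as deltas arrive and can pin a
        # CPU core; a tiny sleep yields the CPU between UI updates. Per the
        # commit, 0.02-0.025 s works well.
        time.sleep(0.02)
        yield text

# Usage: each yielded value is the partial response the UI would display.
chunks = list(build_stream(["Hel", "lo, ", "world"]))
print(chunks[-1])  # Hello, world
```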
				
					
						
							
							
								 
						
							
6bbec79583  feat(llm): Add support for Ollama LLM (#1526)  [2024-02-09 15:50:50 +01:00]

b178b51451  feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432)  [2024-02-07 19:59:32 +01:00]

24fae660e6  feat: Add stream information to generate SDKs (#1569)  [2024-02-02 16:14:22 +01:00]