Commit Graph

  • 24fae660e6
    feat: Add stream information to generate SDKs (#1569) Iván Martínez 2024-02-02 16:14:22 +0100
  • 3e67e21d38
    Add embedding mode config (#1541) Pablo Orgaz 2024-01-25 10:55:32 +0100
  • 869233f0e4
    fix: Adding an LLM param to fix broken generator from llamacpp (#1519) Naveen Kannan 2024-01-17 12:10:45 -0500
  • e326126d0d
    feat: add mistral + chatml prompts (#1426) CognitiveTech 2024-01-16 16:51:14 -0500
  • 6191bcdbd6
    fix: minor bug in chat stream output - python error being serialized (#1449) Robert Gay 2024-01-16 07:41:20 -0800
  • d3acd85fe3
    fix(tests): load the test settings only when running tests Iván Martínez 2024-01-09 12:03:16 +0100
  • 0a89d76cc5
    fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 Guido Schulz 2023-12-26 13:09:31 +0100
  • 2d27a9f956
    feat(llm): Add openailike llm mode (#1447) Matthew Hill 2023-12-26 04:26:08 -0500
  • fee9f08ef3
    Move back to 3900 for the context window to avoid melting local machines imartinez 2023-12-22 18:21:43 +0100
  • fde2b942bc
    fix(deploy): fix local and external dockerfiles Iván Martínez 2023-12-22 14:16:46 +0100
  • 4c69c458ab
    Improve ingest logs (#1438) Iván Martínez 2023-12-21 17:13:46 +0100
  • 4780540870
    feat(settings): Configurable context_window and tokenizer (#1437) Iván Martínez 2023-12-21 14:49:35 +0100
  • 6eeb95ec7f
    feat(API): Ingest plain text (#1417) Iván Martínez 2023-12-18 21:47:05 +0100
  • 059f35840a
    fix(docker): docker broken copy (#1419) Pablo Orgaz 2023-12-18 16:55:18 +0100
  • 8ec7cf49f4
    feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415) Iván Martínez 2023-12-17 16:11:08 +0100
  • c71ae7cee9
    feat(ui): make chat area stretch to fill the screen (#1397) Rohit Das 2023-12-17 16:32:13 +0530
  • 2564f8d2bb
    fix(settings): correct yaml multiline string (#1403) cognitivetech 2023-12-16 13:02:46 -0500
  • 4e496e970a
    docs: remove misleading comment about pgpt working with python 3.12 (#1394) Eliott Bouhana 2023-12-15 21:35:02 +0100
  • 3582764801
    ci: fix preview docs checkout ref (#1393) Federico Grandi 2023-12-12 20:33:34 +0100
  • 1d28ae2915
    docs: fix minor capitalization typo (#1392) Federico Grandi 2023-12-12 20:31:38 +0100
  • e8ac51bba4
    chore(main): release 0.2.0 (#1387) github-actions[bot] 2023-12-10 20:08:12 +0100
  • 145f3ec9f4
    feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 3ly-13 2023-12-10 12:45:14 -0600
  • a072a40a7c
    Allow setting OpenAI model in settings (#1386) 3ly-13 2023-12-09 13:13:00 -0600
  • a3ed14c58f
    feat(llm): drop default_system_prompt (#1385) Louis Melchior 2023-12-08 23:13:51 +0100
  • f235c50be9
    Delete old docs (#1384) Iván Martínez 2023-12-08 22:39:23 +0100
  • 9302620eac
    Adding german speaking model to documentation (#1374) EEmlan 2023-12-08 11:26:25 +0100
  • 9cf972563e
    Add setup option to Makefile (#1368) Max Zangs 2023-12-08 10:34:12 +0100
  • 3d301d0c6f
    chore(main): release 0.1.0 (#1094) github-actions[bot] 2023-12-01 14:45:54 +0100
  • 56af625d71
    Fix the parallel ingestion mode, and make it available through conf (#1336) lopagela 2023-11-30 11:41:55 +0100
  • b7ca7d35a0
    Update ingest api docs with Windows support (#1289) Francisco García Sierra 2023-11-29 20:56:37 +0100
  • 28d03fdda8
    Adding working combination of LLM and Embedding Model to recipes (#1315) ishaandatta 2023-11-30 01:24:22 +0530
  • aabdb046ae
    Add docker compose (#1277) Phi Long 2023-11-29 23:46:40 +0800
  • 64ed9cd872
    Allow passing a system prompt (#1318) Iván Martínez 2023-11-29 15:51:19 +0100
  • 9c192ddd73
    Added max_new_tokens as a config option to llm yaml block (#1317) Gianni Acquisto 2023-11-26 19:17:29 +0100
  • baf29f06fa
    Adding docs about embeddings settings + adding the embedding.mode: local in mock profile (#1316) Gianni Acquisto 2023-11-26 17:32:11 +0100
  • bafdd3baf1
    Ingestion Speedup Multiple strategy (#1309) lopagela 2023-11-25 20:12:09 +0100
  • 546ba33e6f
    Update readme with supporters info (#1311) Iván Martínez 2023-11-25 18:35:59 +0100
  • 944c43bfa8
    Multi language support - fern debug (#1307) Iván Martínez 2023-11-25 14:34:23 +0100
  • e8d88f8952
    Update preview-docs.yml Iván Martínez 2023-11-25 10:14:04 +0100
  • c6d6e0e71b
    Update preview-docs.yml to enable debug Iván Martínez 2023-11-24 17:51:33 +0100
  • 510caa576b
    Make qdrant the default vector db (#1285) Iván Martínez 2023-11-20 16:19:22 +0100
  • f1cbff0fb7
    fix: Windows permission error on ingest service tmp files (#1280) Francisco García Sierra 2023-11-20 10:08:03 +0100
  • a09cd7a892
    Update llama_index to 0.9.3 (#1278) lopagela 2023-11-19 18:49:36 +0100
  • 36f69eed0f
    Refactor documentation architecture (#1264) lopagela 2023-11-19 18:46:09 +0100
  • 57a829a8e8
    Move fern workflows to root workflows folder (#1273) Iván Martínez 2023-11-18 20:47:44 +0100
  • 8af5ed3347
    Delete CNAME Iván Martínez 2023-11-18 20:23:05 +0100
  • 224812f7f6
    Update to gradio 4 and allow upload multiple files at once in UI (#1271) lopagela 2023-11-18 20:19:43 +0100
  • adaa00ccc8
    Fix/readme UI image (#1272) Iván Martínez 2023-11-18 20:19:03 +0100
  • 99dc670df0
    Add badges in the README.md (#1261) lopagela 2023-11-18 18:47:30 +0100
  • f7d7b6cd4b
    Fixed the avatar of the box by using a local file (#1266) lopagela 2023-11-18 12:29:27 +0100
  • 0d520026a3
    fix: Windows 11 failing to auto-delete tmp file (#1260) Pablo Orgaz 2023-11-17 18:23:57 +0100
  • 4197ada626
    feat: enable resume download for hf_hub_download (#1249) Lai Zn 2023-11-17 07:13:11 +0800
  • 09d9a91946
    Create CNAME Iván Martínez 2023-11-17 00:07:50 +0100
  • f339f7608c
    Move Docs to Fern (#1257) Iván Martínez 2023-11-16 23:25:14 +0100
  • ff7e2bc9dd
    Delete CNAME Iván Martínez 2023-11-16 22:53:00 +0100
  • 2a417d2f61
    Fix/qdrant support (#1253) Iván Martínez 2023-11-16 13:29:17 +0100
  • 23fa530c31
    added `wipe` make command (#1215) Dominik Fuchs 2023-11-16 11:44:02 +0100
  • 03d1ae6d70
    feat: Qdrant support (#1228) Anush 2023-11-14 01:53:26 +0530
  • 86fc4781d8
    Fix openai setting literal (#1221) Iván Martínez 2023-11-12 22:29:26 +0100
  • 022bd718e3
    fix: Remove global state (#1216) Pablo Orgaz 2023-11-12 22:20:36 +0100
  • f394ca61bb
    Reuse existing stored index during ingestion (#1220) Iván Martínez 2023-11-12 22:14:38 +0100
  • aa70d3d9f0
    Add simple Basic auth (#1203) lopagela 2023-11-12 19:05:00 +0100
  • b7647542f4
    Curate sources to avoid the UI crashing (#1212) Iván Martínez 2023-11-12 10:59:51 +0100
  • a579c9bdc5
    Update poetry lock (#1209) lopagela 2023-11-11 22:44:19 +0100
  • a22969ad1f
    Add sources to completions APIs and UI (#1206) Iván Martínez 2023-11-11 21:39:15 +0100
  • dbd99e7b4b
    Update description.md (#1107) César García 2023-11-11 09:23:46 +0100
  • 8487440a6f
    Add basic CORS (#1198) lopagela 2023-11-10 14:29:43 +0100
  • a666fd5b73
    Refactor UI state management (#1191) lopagela 2023-11-10 10:42:43 +0100
  • 55e626eac7
    Update README.md Iván Martínez 2023-11-10 10:33:17 +0100
  • c81f4b2ebd
    Search in Docs to UI (#1186) Iván Martínez 2023-11-09 12:44:57 +0100
  • 1e96e3a29e
    Stale issues/PRs not updated in 15 days (#1181) Iván Martínez 2023-11-08 12:07:17 +0100
  • f75f60b234
    Create stale.yml (#1177) Iván Martínez 2023-11-07 15:41:05 +0100
  • 23cd3fea10
    Parse JSON files using llama_index JSONReader (#1176) lopagela 2023-11-07 15:39:40 +0100
  • 0c40cfb115
    Endpoint to delete documents ingested (#1163) lopagela 2023-11-06 15:47:42 +0100
  • 6583dc84c0
    feat: Disable Gradio Analytics (#1165) lopagela 2023-11-06 14:31:26 +0100
  • 0d677e10b9
    feat: move torch and transformers to local group (#1172) Pablo Orgaz 2023-11-06 14:24:16 +0100
  • ad512e3c42
    Feature/sagemaker embedding (#1161) Iván Martínez 2023-11-05 16:16:49 +0100
  • f29df84301
    Disable chromaDB anonymous information collection (#1144) Pierre Marais 2023-11-02 12:45:48 +0100
  • a517a588c4
    fix: sagemaker config and chat methods (#1142) Pablo Orgaz 2023-10-30 21:54:41 +0100
  • b0e258265f
    Improve logging and error handling when ingesting an entire folder (#1132) NetroScript 2023-10-30 21:54:09 +0100
  • 5d1be6e94c
    chore: only generate docker images on demand (#1134) Pablo Orgaz 2023-10-29 21:48:16 +0100
  • 64c5ae214a
    feat: Drop loguru and use builtin `logging` (#1133) lopagela 2023-10-29 19:11:02 +0100
  • 24cfddd60f
    fix: fix pytorch version to avoid wheel bug (#1123) Pablo Orgaz 2023-10-27 20:27:40 +0200
  • 895588b82a
    fix: Docker and sagemaker setup (#1118) Pablo Orgaz 2023-10-27 13:29:29 +0200
  • 768e5ff505
    chore: Update README.md (#1102) Shivam Singh 2023-10-24 16:13:41 +0530
  • 78546524d0
    Use OpenAI for embeddings when openai mode is selected (#1096) Iván Martínez 2023-10-23 10:50:42 +0200
  • 769a047b54
    chore: add GitHub metadata (#1085) Federico Grandi 2023-10-23 10:49:02 +0200
  • ba23443a70
    fix: typo in README.md (#1091) Ikko Eltociear Ashimine 2023-10-23 15:54:12 +0900
  • b8383e00a6
    chore(main): release 0.0.2 (#1088) github-actions[bot] 2023-10-20 18:29:21 +0200
  • f5a9bf4e37
    fix: chromadb max batch size (#1087) Iván Martínez 2023-10-20 18:24:56 +0200
  • b46c1087e2
    chore: add linux instructions and C++ guide (#1082) Pablo Orgaz 2023-10-20 14:36:50 +0200
  • 97d860a7c9
    chore(main): release 0.0.1 (#1086) github-actions[bot] 2023-10-20 13:08:51 +0200
  • 490d93fdc1
    chore: Initial version Iván Martínez 2023-10-20 12:57:52 +0200
  • aa4bb17a2e
    fix: make docs more visible (#1081) Pablo Orgaz 2023-10-19 22:12:30 +0200
  • d249a17c33
    feat(ui): add LLM mode to UI (#1080) Iván Martínez 2023-10-19 19:21:29 +0200
  • b7450911b2
    feat: Release GitHub action (#1078) Pablo Orgaz 2023-10-19 17:34:41 +0200
  • 3ad1da019b
    Update README.md Iván Martínez 2023-10-19 16:17:35 +0200
  • 51cc638758
    Next version of PrivateGPT (#1077) Pablo Orgaz 2023-10-19 16:04:35 +0200
  • 78d1ef44ad
    Update README.md Iván Martínez 2023-09-25 16:03:09 +0200
  • 0b5a6687e3
    Merge pull request #999 from imartinez/990-cannot-submit-more-than-166-embeddings-at-once-while-ingesting Iván Martínez 2023-09-25 11:59:19 +0200