* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
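The Ollama embeddings support and the removal of ServiceContext both reflect llamaindex 0.10.x conventions, where components are passed directly or set on the global Settings object rather than bundled in a ServiceContext. As a minimal sketch (not this repository's actual wiring), assuming the llama-index-embeddings-ollama package is installed and an Ollama server is running locally, the new-style configuration looks roughly like:

```python
# Sketch: configuring Ollama embeddings with llama-index 0.10.x.
# The model name is illustrative; any embedding model pulled into
# the local Ollama server would work.
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding

Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",      # illustrative model choice
    base_url="http://localhost:11434",  # default local Ollama endpoint
)

# Components such as chat engines now pick up the embed model from
# Settings (or direct arguments) instead of a ServiceContext.
vector = Settings.embed_model.get_query_embedding("hello world")
print(len(vector))
```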
__init__.py
ingest_router.py
ingest_service.py
ingest_watcher.py
model.py