fix: 503 when private-gpt queries the ollama service (#2104)

When running private-gpt with an external Ollama API, the ollama service
returns 503 on startup because the ollama service (a Traefik proxy) might
not be ready yet.

- Add a healthcheck to the ollama service that tests the connection to the
  external Ollama instance
- Make the private-gpt-ollama service depend on the ollama service being
  service_healthy (see the sketch below)
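
For reference, a minimal sketch of the resulting docker-compose.yaml fragment
(only the lines touched by this change are shown; other service options are
omitted, see the full diff below):

services:
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy

  # Traefik proxy that fronts the external Ollama instance.
  ollama:
    image: traefik:v2.10
    healthcheck:
      # Mark the proxy healthy only once the Ollama API responds.
      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
      interval: 10s
      retries: 3
      start_period: 5s
      timeout: 5s

With condition: service_healthy, Compose delays starting private-gpt-ollama
until the proxy's healthcheck passes, so the first request no longer races
the external Ollama endpoint.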

Co-authored-by: Koh Meng Hui <kohmh@duck.com>
meng-hui 2024-10-17 18:44:28 +08:00 committed by GitHub
parent 5851b02378
commit 940bdd49af
1 changed file with 8 additions and 1 deletion

@@ -29,7 +29,8 @@ services:
       - ollama-cuda
       - ollama-api
     depends_on:
-      - ollama
+      ollama:
+        condition: service_healthy
 
   # Private-GPT service for the local mode
   # This service builds from a local Dockerfile and runs the application in local mode.
@@ -60,6 +61,12 @@ services:
   # This will route requests to the Ollama service based on the profile.
   ollama:
     image: traefik:v2.10
+    healthcheck:
+      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
+      interval: 10s
+      retries: 3
+      start_period: 5s
+      timeout: 5s
     ports:
       - "8080:8080"
     command: