From 75a114174347abf7f6a6479c973c699dbd0b2598 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Iv=C3=A1n=20Mart=C3=ADnez?=
Date: Mon, 8 May 2023 23:49:54 +0200
Subject: [PATCH] Update README.md

Reflect the updated execution flow
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a0da2e1..c270493 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@ And wait for the script to require your input.
 > Enter a query:
 ```

-Hit enter. You'll see the LLM print the context it is using from your documents and then the final answer; you can then ask another question without re-running the script, just wait for the prompt again.
+Hit enter. You'll need to wait 20-30 seconds (depending on your machine) while the LLM model consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without re-running the script, just wait for the prompt again.

 Note: you could turn off your internet connection, and the script inference would still work. No data gets out of your local environment.
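
The execution flow the updated README paragraph describes (retrieve context chunks, run local inference, print the answer plus the 4 sources used) can be sketched roughly as below. This is only an illustration of the flow, not privateGPT's actual code; `answer_query`, `fake_retriever`, and `fake_llm` are hypothetical stand-ins so the sketch runs with no model or index present.

```python
# Hypothetical sketch of the README's query flow: retrieve chunks,
# build a prompt, ask the local model, and return answer + sources.

def answer_query(query, retriever, llm, n_sources=4):
    # Keep only the top chunks, mirroring the "4 sources" in the README.
    sources = retriever(query)[:n_sources]
    prompt = "\n".join(sources) + "\n\nQuestion: " + query
    return llm(prompt), sources

# Stand-ins (no network, no model) so the flow can be demonstrated offline,
# matching the note that no data leaves the local environment.
fake_retriever = lambda q: [f"chunk-{i} about {q}" for i in range(10)]
fake_llm = lambda p: "answer based only on the local context"

answer, sources = answer_query("what is in my documents?", fake_retriever, fake_llm)
print(answer)
for s in sources:
    print("> source:", s)
```

An interactive loop would simply wrap `answer_query` in `while True: query = input("Enter a query: ") ...`, which is why another question can be asked without re-running the script.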