Remove explicit model arg in HfApiModel - which wouldn't run without Pro (#373)

Sanjeed authored 2025-01-30 23:25:09 +05:30 · committed by GitHub
parent 023419bf92
commit aa55f137e5
1 changed file with 2 additions and 7 deletions

@@ -127,20 +127,15 @@ The agent will need these arguments upon initialization:
- `model`: the LLM that powers the agent.
Our `model` must be a callable that takes a list of messages as input and returns text. It also needs to accept a `stop_sequences` argument that indicates when to stop generating. For convenience, we directly use the `HfApiModel` class provided in the package to get an LLM engine that calls Hugging Face's Inference API (a minimal sketch of a hand-rolled alternative follows the code block below).
-And we use [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as the LLM engine because:
-- It has a long 128k context, which is helpful for processing long source documents
-- It is served for free at all times on HF's Inference API!
-_Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).
+> [!NOTE] To use a specific model, pass it like this: `HfApiModel("meta-llama/Llama-3.3-70B-Instruct")`. The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).
```py
from smolagents import HfApiModel, CodeAgent
agent = CodeAgent(
tools=[retriever_tool], model=HfApiModel("meta-llama/Llama-3.3-70B-Instruct"), max_steps=4, verbosity_level=2
tools=[retriever_tool], model=HfApiModel(), max_steps=4, verbosity_level=2
)
```
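For reference, here is a minimal sketch of what a hand-rolled `model` callable could look like under the interface described above (a list of role/content message dicts in, text out, honoring `stop_sequences`). The flat message format and the `call_my_backend` helper are illustrative assumptions, not part of smolagents:
```py
# Minimal sketch of a custom model callable (assumed message format:
# dicts with "role" and "content" keys; check your smolagents version).
def my_model(messages, stop_sequences=None):
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    text = call_my_backend(prompt)  # hypothetical call to your own LLM backend
    # Truncate at the first stop sequence so the agent can parse the output
    for stop in stop_sequences or []:
        if stop in text:
            text = text.split(stop, 1)[0]
    return text
```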
Upon initialization, the CodeAgent is automatically given a default system prompt that tells the LLM engine to reason step by step and generate tool calls as code snippets; you could replace this prompt template with your own as needed.
Then, when its `.run()` method is called, the agent takes care of calling the LLM engine and executing the tool calls, in a loop that ends only when the `final_answer` tool is called with the final answer as its argument.
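For example, launching the agent could look like this (the query is illustrative; any question answerable from your indexed knowledge base works):
```py
# Run the agent: it loops over LLM calls and tool executions until the
# final_answer tool is invoked, then returns that answer.
agent_output = agent.run("How can I push a model to the Hub?")

print("Final output:")
print(agent_output)
```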