<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Agents - Guided tour

[[open-in-colab]]

In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.

### Building your agent

To initialize a minimal agent, you need at least these two arguments:

- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM: it is a system that uses an LLM as its engine. You can use any of these options:
    - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
    - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
    - [`LiteLLMModel`] lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!

- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it.

```python
from smolagents import CodeAgent, HfApiModel
from huggingface_hub import login

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = HfApiModel(model_id=model_id)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

#### Code execution

A Python interpreter executes the code on a set of inputs passed along with your tools.
This should be safe because the only functions that can be called are the tools you provided (especially if they are only tools provided by Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.

The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue.
You can authorize additional imports by passing the authorized modules as a list of strings in the argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]:

```py
from smolagents import CodeAgent

agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```
This gives you at the end of the agent run:
```text
'Hugging Face – Blog'
```
The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. You can also use the [E2B code executor](https://e2b.dev/docs#what-is-e2-b) instead of a local Python interpreter by first [setting the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then passing `use_e2b_executor=True` upon agent initialization.
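
As a minimal sketch, an E2B-backed agent can be set up like this (this assumes you have an E2B account, the `E2B_API_KEY` environment variable set, and the `model` defined earlier):

```py
from smolagents import CodeAgent

# Runs the generated code in an E2B sandbox instead of the local interpreter
sandboxed_agent = CodeAgent(tools=[], model=model, use_e2b_executor=True)
sandboxed_agent.run("Compute 2**32 and return the result.")
```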

> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!

### The system prompt

Upon initialization of the agent system, a system prompt (attribute `system_prompt`) is built automatically by inserting the descriptions extracted from the tools into a predefined system prompt template.

But you can customize it!

Let's see how it works. For example, check the system prompt for the [`CodeAgent`] (the version below is slightly simplified).

The prompt and output parser were automatically defined, but you can easily inspect them by reading the `system_prompt_template` attribute of your agent.

```python
print(agent.system_prompt_template)
```
Here is what you get:
```text
You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.

At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
In the end you have to return a final answer using the `final_answer` tool.

Here are a few examples using notional tools:
---
{examples}

Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:

{{tool_descriptions}}

{{managed_agents_descriptions}}

Here are the rules you should always follow to solve your task:
1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
2. Use only variables that you have defined!
3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.
8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
10. Don't give up! You're in charge of solving the task, not providing directions to solve it.

Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
```

The system prompt includes:
- An *introduction* that explains how the agent should behave and what tools are.
- A description of all the tools, defined by a `{{tool_descriptions}}` token that is dynamically replaced at runtime with the tools defined/chosen by the user.
    - The tool description comes from the tool attributes, `name`, `description`, `inputs` and `output_type`, and a simple `jinja2` template that you can refine.
- The expected output format.

You could improve the system prompt, for example, by adding an explanation of the output format.

For maximum flexibility, you can overwrite the whole system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter.

```python
from smolagents import ToolCallingAgent, PythonInterpreterTool, TOOL_CALLING_SYSTEM_PROMPT

modified_prompt = TOOL_CALLING_SYSTEM_PROMPT  # This is where you can do your modifications

agent = ToolCallingAgent(tools=[PythonInterpreterTool()], model=model, system_prompt=modified_prompt)
```

> [!WARNING]
> Please make sure to define the `{{tool_descriptions}}` string somewhere in your custom prompt so the agent is aware of the available tools.

### Inspecting an agent run

Here are a few useful attributes to inspect what happened after a run:
- `agent.logs` stores the fine-grained logs of the agent's run. At every step, everything gets stored in a dictionary that is then appended to `agent.logs`.
- Running `agent.write_inner_memory_from_logs()` creates an inner memory of the agent's logs for the LLM to view, as a list of chat messages. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task as separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method. Both are sketched below.
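
As a minimal sketch, assuming the `agent` defined earlier has already completed a run:

```py
# Fine-grained, step-by-step records of the run
for step_log in agent.logs:
    print(step_log)

# Higher-level view: the run replayed as a list of chat messages
messages = agent.write_inner_memory_from_logs()
for message in messages:
    # Each entry follows the usual chat-message shape with a role and content
    print(message["role"], "-", str(message["content"])[:100])
```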

## Tools

A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
- A name
- A description
- Input types and descriptions
- An output type

You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `__call__` method to perform the action.

When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.
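
For instance, here is a sketch of how you could inspect these attributes yourself (the printed values are illustrative and may differ between versions):

```py
from smolagents import PythonInterpreterTool

tool = PythonInterpreterTool()
print(tool.name)         # the identifier the LLM uses to call the tool
print(tool.description)  # natural-language description of what the tool does
print(tool.inputs)       # input names with their types and descriptions
print(tool.output_type)  # type of the value the tool returns
```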

### Default toolbox

`smolagents` comes with a default toolbox for empowering agents, that you can add to your agent upon initialization with the argument `add_base_tools=True`:

- **DuckDuckGo web search**: performs a web search using DuckDuckGo.
- **Python code interpreter**: runs the LLM-generated Python code in a secure environment. This tool will only be added to a [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.

You can manually use a tool by loading it with the [`load_tool`] function and calling it with a task to perform.

```python
from smolagents import load_tool

search_tool = load_tool("web_search")
print(search_tool("Who's the current president of Russia?"))
```

### Create a new tool

You can create your own tool for use cases not covered by the default tools from Hugging Face.
For example, let's create a tool that returns the most downloaded model for a given task from the Hub.

You'll start with the code below.

```python
from huggingface_hub import list_models

task = "text-classification"

most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(most_downloaded_model.id)
```

This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator:

```py
from huggingface_hub import list_models

from smolagents import tool


@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the most downloaded model.
    """
    most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded_model.id
```

The function needs:
- A clear name. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_tool`.
- Type hints on both inputs and output.
- A description that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint).
All these will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!

> [!TIP]
> This definition format is the same as tool schemas used in `apply_chat_template`; the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).

Then you can directly initialize your agent:
```py
from smolagents import CodeAgent, HfApiModel
agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

You get the following logs:
```text
╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
│                                                                                           │
│ Can you give me the name of the model that has the most downloads in the 'text-to-video' │
│ task on the Hugging Face Hub?                                                             │
│                                                                                           │
╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 model_name = model_download_tool(task="text-to-video")                                │
│   2 print(model_name)                                                                     │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Execution logs:
ByteDance/AnimateDiff-Lightning

Out: None
[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 final_answer("ByteDance/AnimateDiff-Lightning")                                       │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Out - Final answer: ByteDance/AnimateDiff-Lightning
[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
Out[20]: 'ByteDance/AnimateDiff-Lightning'
```

This is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.
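
As a rough sketch, here is the same tool written as a [`Tool`] subclass (the attributes mirror the tool API described in the Tools section above; treat this as an illustration rather than a drop-in implementation):

```py
from huggingface_hub import list_models

from smolagents import Tool


class ModelDownloadTool(Tool):
    # These class attributes form the tool's API shown to the LLM
    name = "model_download_tool"
    description = (
        "This is a tool that returns the most downloaded model of a given task "
        "on the Hugging Face Hub. It returns the name of the checkpoint."
    )
    inputs = {
        "task": {
            "type": "string",
            "description": "The task for which to get the most downloaded model.",
        }
    }
    output_type = "string"

    def forward(self, task: str) -> str:
        most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return most_downloaded_model.id
```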

Read more in the [dedicated tool tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).

## Multi-agents

Multi-agent systems were introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).

In this type of framework, you have several agents working together to solve your task instead of only one.
This empirically yields better performance on most benchmarks. The reason is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows them to achieve efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.

You can easily build hierarchical multi-agent systems with `smolagents`.

To do so, encapsulate the agent in a [`ManagedAgent`] object. This object needs the arguments `agent`, `name`, and `description`, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.

Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]:

```py
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent

model = HfApiModel()

web_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

managed_web_agent = ManagedAgent(
    agent=web_agent,
    name="web_search",
    description="Runs web searches for you. Give it your query as an argument."
)

manager_agent = CodeAgent(
    tools=[], model=model, managed_agents=[managed_web_agent]
)

manager_agent.run("Who is the CEO of Hugging Face?")
```

> [!TIP]
> For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).

## Talk with your agent and visualize its thoughts in a cool Gradio interface

You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process. Here is an example:

```py
from smolagents import (
    load_tool,
    CodeAgent,
    HfApiModel,
    GradioUI
)

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image")

model = HfApiModel()

# Initialize the agent with the image generation tool
agent = CodeAgent(tools=[image_generation_tool], model=model)

GradioUI(agent).launch()
```

Under the hood, when the user types a new message, the agent is launched with `agent.run(user_request, reset=False)`.
The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.

You can also use this `reset=False` argument to keep the conversation going in any other agentic application, as sketched below.
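
For example, a minimal multi-turn exchange (assuming the `agent` defined above) could look like this:

```py
# First turn: memory starts fresh
agent.run("Generate an image of a mountain lake.")

# Follow-up turn: reset=False keeps the previous steps in memory,
# so the agent can resolve "it" from the earlier request
agent.run("Now render it at sunset.", reset=False)
```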

## Next steps

For more in-depth usage, you will then want to check out our tutorials:
- [the explanation of how our code agents work](./tutorials/secure_code_execution)
- [this guide on how to build good agents](./tutorials/building_good_agents)
- [the in-depth guide for tool usage](./tutorials/tools)
|