Merge pull request #72 from CakeCrusher/CakeCrusher/guide_fixes
e2b details
commit 07015d12fe
@@ -23,15 +23,12 @@ In this guided visit, you will learn how to build an agent, how to run it, and h

To initialize a minimal agent, you need at least these two arguments:

-- An text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses a LLM as its engine.
-- A list of tools from which the agent pick tools to execute
+- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses a LLM as its engine. You can use any of these options:
+- [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
+- [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
+- [`LiteLLMModel`] lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!

-For your model, you can use any of these options:
-- [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
-- [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
-- We also provide [`LiteLLMModel`], which lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!

-You will also need a `tools` argument which accepts a list of `Tools` - it can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.
+- `tools`, A list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it.
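
For illustration, a minimal version of that setup could look like the sketch below. Only `HfApiModel`, `tools` and `add_base_tools` are named in the diff above; the `smolagents` import path, the `CodeAgent` class and the `run()` call are assumptions based on the rest of the guide:

```py
# Minimal sketch, not part of the diff shown here.
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()  # pass model_id=... to pick a specific Inference API model
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run("What is the 10th number in the Fibonacci sequence?")
```
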
@@ -69,7 +66,7 @@ This gives you at the end of the agent run:
```text
'Hugging Face – Blog'
```

-The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. You can also use E2B code executor instead of a local Python interpreter by passing `use_e2b_executor=True` upon agent initialization.
+The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. You can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) instead of a local Python interpreter by first [setting the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then passing `use_e2b_executor=True` upon agent initialization.
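
As a sketch of that E2B setup (assuming `E2B_API_KEY` is already exported in your shell and that the agent class is the same `CodeAgent` assumed above):

```py
import os

from smolagents import CodeAgent, HfApiModel

# The E2B SDK reads E2B_API_KEY from the environment; fail early if it is missing.
if not os.environ.get("E2B_API_KEY"):
    raise RuntimeError("Set E2B_API_KEY before enabling the E2B executor")

# With use_e2b_executor=True, the code generated by the agent runs in a remote
# E2B sandbox instead of the local Python interpreter.
agent = CodeAgent(tools=[], model=HfApiModel(), use_e2b_executor=True)
agent.run("What is 2 to the power of 3.7384?")
```
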
> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!
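
If you do want the generated code to import extra modules in the local executor, a cautious pattern is to whitelist only what you trust. The `additional_authorized_imports` argument below is an assumption about the agent's constructor rather than something stated in this diff:

```py
from smolagents import CodeAgent, HfApiModel

# Hypothetical illustration: only the explicitly listed modules may be imported
# by agent-generated code; keep the list as small as possible.
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    additional_authorized_imports=["datetime", "math"],
)
agent.run("How many days are there between 2024-07-14 and 2025-01-01?")
```
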
@@ -336,4 +333,4 @@ You can also use this `reset=False` argument to keep the conversation going in a
For more in-depth usage, you will then want to check out our tutorials:
- [the explanation of how our code agents work](./tutorials/secure_code_execution)
- [this guide on how to build good agents](./tutorials/building_good_agents).
-- [the in-depth guide for tool usage](./tutorials/building_good_agents).
+- [the in-depth guide for tool usage](./tutorials/building_good_agents).