🤗 Smolagents - a smol library to build great agents!

Smolagents is a library that enables you to run powerful agents in a few lines of code. It offers:

Simplicity: the logic for agents fits in roughly 1,000 lines of code. We kept abstractions to a minimal shape above raw code!

🌐 Support for any LLM: it supports models hosted on the Hub, loaded locally via transformers or served through our Inference API, as well as models from OpenAI, Anthropic, and many more via our LiteLLM integration.

🧑‍💻 First-class support for Code Agents, i.e. agents that write their actions in code (as opposed to "agents being used to write code"); read more in the "Code agents?" section below.

🤗 Hub integrations: you can share and load tools to/from the Hub (see the sketch below), with more to come!
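
For instance, loading a community tool from the Hub could look like the sketch below. The load_tool helper and the trust_remote_code flag are assumptions based on common Hugging Face conventions, and the repo id is a hypothetical placeholder; check the docs for the exact API.

from smolagents import load_tool

# Hypothetical repo id: replace it with a tool repository you trust.
# Loading a tool executes code from the Hub, hence the explicit opt-in flag.
search_tool = load_tool("your-username/your-tool", trust_remote_code=True)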

Quick demo

First install the package.

pip install smolagents

Then define your agent, give it the tools it needs and run it!

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

agent.run("What time would the world's fastest car take to travel from New York to San Francisco?")
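
You are not tied to HfApiModel: thanks to the LiteLLM integration mentioned above, you can swap in another provider. The sketch below assumes a LiteLLMModel class exposed at the top level that accepts a model_id and picks up your provider API key from the environment; check the docs for the exact signature.

from smolagents import CodeAgent, DuckDuckGoSearchTool, LiteLLMModel

# Assumption: LiteLLMModel routes the request through LiteLLM to the named provider;
# the model_id below is only an example, use any model your API key can access.
model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest")

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)
agent.run("How much time would the world's fastest car take to travel from New York to San Francisco?")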

TODO: Add video

Code agents?

We built agents where the LLM engine writes its actions in code. This approach has been shown to work better than the current industry practice of letting the LLM output a dictionary of the tools it wants to call: it uses 30% fewer steps (thus 30% fewer LLM calls) and reaches higher performance on difficult benchmarks. Head to [./conceptual_guides/intro_agents.md] to learn more about that.

In particular, since code execution can be a security concern (arbitrary code execution!), we provide two options at runtime:

  • a secure Python interpreter to run code more safely in your environment (see the sketch below)
  • a sandboxed environment such as E2B.
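
As a rough sketch of the first option: the local interpreter is assumed to restrict imports to an allow-list that you can extend per agent. The additional_authorized_imports parameter name is an assumption; check the reference docs for the exact argument.

from smolagents import CodeAgent, HfApiModel

# Assumption: the secure interpreter only allows an explicit set of imports,
# and additional_authorized_imports extends that allow-list for this agent.
agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    additional_authorized_imports=["requests", "bs4"],
)
agent.run("Fetch https://huggingface.co/blog and report the page title.")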

How lightweight is it?

We strived to keep abstractions to a strict minimum: the main code in agents.py is roughly 1,000 lines of code, yet still quite complete. It implements several types of agents: the CodeAgent, which writes its actions as code snippets, and the more classic ToolCallingAgent, which leverages built-in tool-calling methods.
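
Switching between the two is assumed to be a one-line change; the sketch below takes ToolCallingAgent to share CodeAgent's constructor, which you should verify against the reference docs.

from smolagents import ToolCallingAgent, DuckDuckGoSearchTool, HfApiModel

# Assumption: ToolCallingAgent accepts the same tools/model arguments as CodeAgent,
# but emits built-in tool calls instead of executable code snippets.
agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
agent.run("Who is the current director general of CERN?")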

Many people ask: why use a framework at all? Because a big part of this is non-trivial. For instance, the code agent has to keep a consistent code format across its system prompt, its parser, and its executor, so our framework handles this complexity for you. But of course, we still encourage you to dig into the source code and use only the bits you need, to the exclusion of everything else!