From 18d826f09def50bc61fcf4ee35ccad5d920e6bfb Mon Sep 17 00:00:00 2001
From: Aymeric
Date: Thu, 19 Dec 2024 17:32:51 +0100
Subject: [PATCH] Start ReAct guide

---
 docs/source/conceptual_guides/react.md | 44 ++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 docs/source/conceptual_guides/react.md

diff --git a/docs/source/conceptual_guides/react.md b/docs/source/conceptual_guides/react.md
new file mode 100644
index 0000000..a4b6277
--- /dev/null
+++ b/docs/source/conceptual_guides/react.md
@@ -0,0 +1,44 @@
+
+# ReAct agents
+
+## One-shot agent
+
+This agent has a planning step, then generates Python code to execute all its actions at once. It natively handles different input and output types for its tools, so it is the recommended choice for multimodal tasks.
+
+## ReAct agents
+
+This is the go-to agent for solving reasoning tasks, since the ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) makes it really efficient to reason on the basis of its previous observations, as sketched in the example below.
+
+We implement two versions of ReAct agents:
+- [`JsonAgent`] generates its tool calls as JSON in its output.
+- [`CodeAgent`] is a variant that generates its tool calls as blobs of code, which works really well for LLMs that have strong coding performance.
+
+> [!TIP]
+> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about ReAct agents.
+
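+To make this loop concrete, here is a minimal, self-contained sketch of a ReAct-style loop. It is an illustration only, not this library's actual implementation: `fake_llm`, `web_search`, `TOOLS`, and `run_react` are hypothetical stand-ins for a real model, a real tool, and the agent's run loop.
+
+```python
+# Hypothetical sketch of the ReAct loop: Reason -> Act -> Observe, repeated
+# until the model produces a final answer. None of these names are real library APIs.
+
+def web_search(query: str) -> str:
+    """Toy tool: a real agent would call an actual search API here."""
+    return f"Top result for '{query}': ReAct interleaves reasoning steps with tool calls."
+
+TOOLS = {"web_search": web_search}
+
+def fake_llm(memory: list[str]) -> dict:
+    """Stand-in for the LLM: picks one action per step based on past observations."""
+    if any(line.startswith("Observation:") for line in memory):
+        return {"thought": "I have enough information.", "final_answer": "ReAct = Reason + Act."}
+    return {"thought": "I should search first.", "tool": "web_search", "arguments": {"query": "ReAct framework"}}
+
+def run_react(task: str) -> str:
+    memory = [f"Task: {task}"]
+    while True:
+        step = fake_llm(memory)                        # Reason: think, then choose one action
+        memory.append(f"Thought: {step['thought']}")
+        if "final_answer" in step:                     # Stop once the model answers
+            return step["final_answer"]
+        # Act: a JsonAgent's model would have written this action as a JSON blob for
+        # the framework to parse, while a CodeAgent's model would have written a code
+        # snippet such as web_search(query=...). Either way, the result comes back as
+        # an observation.
+        result = TOOLS[step["tool"]](**step["arguments"])
+        memory.append(f"Observation: {result}")        # Observe: remember the result
+
+print(run_react("What does ReAct stand for?"))
+```
+
+The key point is the cycle: at each step the model reasons over everything in its memory, emits exactly one action, and the observation produced by that action is appended to memory before the next step.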
+
+![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)
\ No newline at end of file