Implementing Anthropic's "think" Tool In TypeScript
Anthropic just released some new research about a simple technique that can help Claude (and likely other LLMs) with tasks requiring complex problem solving.
It's simple—you provide the LLM with a "think" tool it can call.
(Want to know what tools are? Check out this guide.)
The theory is that this tool gives the LLM a moment to think before making a decision. It provides a structured way for the model to reflect on the information it has before proceeding.
This allows the LLM to save important information in its context, which can be used later to make better decisions. It echoes familiar concepts like ReAct and Reflexion.
Let's try implementing it.
The Prompt
We're first going to pull out a description variable, which will serve as the tool's description:
```ts
const description = `Use the tool to think about something.
It will not obtain new information or change the
database, but just append the thought to the log.
Use it when complex reasoning or some cache memory
is needed.`;
```
This text is pulled from Anthropic's article.
The Tool
Let's start with a streamText call from the AI SDK.
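At this point the call is just a shell; we'll fill in the options over the next few steps:

```ts
import { streamText } from "ai";

// The options object is empty for now; the model, maxSteps,
// and the think tool get added in the steps below.
const result = await streamText({});
```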
Let's use Claude 3.7 and set a maxSteps of 10.
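Assuming the @ai-sdk/anthropic provider (the exact model ID is a guess on my part, and the prompt is just a placeholder), that looks something like this:

```ts
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = await streamText({
  // Claude 3.7 Sonnet via the Anthropic provider
  model: anthropic("claude-3-7-sonnet-20250219"),
  // Allow up to 10 sequential steps of tool calls and responses
  maxSteps: 10,
  // Placeholder prompt
  prompt: "Placeholder: a complex, multi-step task",
});
```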
Next, let's pass it a tool called think, passing the description we defined earlier.

We'll then add a parameters object to the think tool. This object will contain a thought field, which is a string. The description is also taken from the article.
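Here's a sketch of those two steps, using the AI SDK's tool helper with a zod schema (the zod dependency and the thought field's exact description are my assumptions):

```ts
import { streamText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const result = await streamText({
  model: anthropic("claude-3-7-sonnet-20250219"),
  maxSteps: 10,
  prompt: "Placeholder: a complex, multi-step task",
  tools: {
    think: tool({
      // The description variable we pulled out in "The Prompt" section
      description,
      // A single string field; its description is also taken from the article
      parameters: z.object({
        thought: z.string().describe("A thought to think about."),
      }),
    }),
  },
});
```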
Finally, let's add an execute function to the think tool. This function won't do anything: it'll simply return the thought passed to it.
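Putting it all together, here's a complete sketch. It assumes AI SDK 4.x with the @ai-sdk/anthropic provider and zod; the model ID, placeholder prompt, and the streaming loop at the end are my additions:

```ts
import { streamText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

// The tool description, taken from Anthropic's article
const description = `Use the tool to think about something.
It will not obtain new information or change the
database, but just append the thought to the log.
Use it when complex reasoning or some cache memory
is needed.`;

const result = await streamText({
  model: anthropic("claude-3-7-sonnet-20250219"),
  maxSteps: 10,
  prompt: "Placeholder: a complex, multi-step task",
  tools: {
    think: tool({
      description,
      parameters: z.object({
        thought: z.string().describe("A thought to think about."),
      }),
      // No side effects: just echo the thought back so it lands in the context
      execute: async ({ thought }) => thought,
    }),
  },
});

// Stream the model's final answer to stdout
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```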
The returned thought then gets saved in the context, and the model can use it in later steps to make better decisions.
Conclusion
And that's it! We've implemented the "think" tool in Claude 3.7 Sonnet.
This is a really useful technique to apply in certain situations. Anthropic recommends it for "complex tasks requiring policy adherence and reasoning in long chains of tool calls." It's a simple addition to your LLM implementation that can yield meaningful improvements in just a few lines of code.
Happy experimenting!