Tool use for LLMs is the idea of giving a language model the ability to take actions by executing external code.
LangChain is a framework that uses ReAct-style prompting (reasoning steps interleaved with actions, building on Chain-of-Thought) to generate a plan of action and then actually carry out those steps.
A tool is a piece of software functionality exposed to the model through a Tool class, which wraps a function. The function's code implements the actual tool behaviour.
In the simplest case, the return value of a tool's function is a string.
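To make that concrete, here is a minimal sketch of a tool-style function on its own, before any LangChain wiring: it accepts a single query string (which it is free to ignore) and returns a string. The name get_time matches the function used later in this article.

```python
from datetime import datetime

def get_time(query: str) -> str:
    # The agent passes the tool a single string argument; this
    # particular tool does not need it, so the input is ignored.
    return "The time is: " + datetime.now().strftime("%H:%M:%S") + "."

result = get_time("what time is it?")
print(result)  # e.g. "The time is: 22:42:02."
```

Any function with this shape (string in, string out) can be wrapped as a tool.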
In this example we use Llama 3, a local LLM running through Ollama.
One simple tool is provided: a function that gets the current time. Normally, an LLM will say it does not have access to a clock when asked the time in a prompt. This tool gives the model that ability.
Overall, the code does the following:
- Define a function that returns the current time
- Define time_tool as a Tool class with a good description of what the tool does and links to the time function
Note that matching the description is probabilistic and not 100% guaranteed. Hence, we make the description as complete and explicit as possible to increase the likelihood of matching. Note also that some models are better than others; Llama 3 performs quite well.
- Link the LLM and the tools list together using LangChain’s initialize_agent()
- Run a prompt asking for the current time. Answering it triggers the time tool, and the tool’s return value is used in the final result
The output is set to verbose, so we can see the actions planned and executed.
Before running the code, make sure Ollama is running and Llama 3 is available with:
$ ollama run llama3
Make sure the required LangChain libraries are installed:
$ pip install langchain
$ pip install langchain-community
$ pip install langchain-ollama
Note: at the time of writing, LangChain >= 0.3.x works well.
A specific version install may be required, for example:
$ pip install langchain==0.3.14
Here is the complete code:
from datetime import datetime
from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool
from langchain_community.chat_models import ChatOllama

def get_time(query: str):
    # So we can see when the tool code is reached:
    print("get_time() called.")
    return "The time is: " + datetime.now().strftime("%H:%M:%S") + "."

time_tool = Tool(
    name="TimeTool",
    description="Fetch time information. Able to get the current time.",
    func=get_time
)

tools = [time_tool]

llm = ChatOllama(model="llama3")

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True
)

query = "What is the time?"
response = agent.invoke(query)
print(response)
Note that the output will not necessarily be consistent between runs. (In the sample run below, the model even reports a slightly different time in its final answer than the tool's observation contained.)
The following is an example run:
> Entering new AgentExecutor chain...
Let's get started!
Question: What is the time?
Thought: Let's get started!
Thought: Since I need to find out what the current time is,
I should use the TimeTool.
Action: TimeTool
Action Input: None needed for this tool, it just provides
the current time.
get_time() called.
Observation: The time is: 22:42:02.
Thought:Final Answer: The time is: 22:41:59.
> Finished chain.
{'input': 'What is the time?',
'output': 'The time is: 22:41:59.'}
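Since invoke() returns a dictionary like the one above, the final answer alone can be pulled out by key. A small sketch, using a literal dictionary shaped like the sample result:

```python
# Shaped like the dictionary the agent's invoke() returned above.
response = {'input': 'What is the time?',
            'output': 'The time is: 22:41:59.'}

# Extract just the final answer string:
answer = response['output']
print(answer)  # prints "The time is: 22:41:59."
```

This is handy when the agent's answer feeds into further code rather than being printed whole.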